00:00:00.001 Started by upstream project "autotest-per-patch" build number 132860 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.018 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.019 The recommended git tool is: git 00:00:00.019 using credential 00000000-0000-0000-0000-000000000002 00:00:00.020 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.045 Fetching changes from the remote Git repository 00:00:00.047 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.070 Using shallow fetch with depth 1 00:00:00.070 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.070 > git --version # timeout=10 00:00:00.087 > git --version # 'git version 2.39.2' 00:00:00.087 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.108 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.108 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.404 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.414 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.426 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:06.426 > git config core.sparsecheckout # timeout=10 00:00:06.435 > git read-tree -mu HEAD # timeout=10 00:00:06.449 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:06.464 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:06.464 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:06.573 [Pipeline] Start of Pipeline 00:00:06.584 [Pipeline] library 00:00:06.586 Loading library shm_lib@master 00:00:06.586 Library shm_lib@master is cached. Copying from home. 00:00:06.597 [Pipeline] node 00:00:06.613 Running on WFP20 in /var/jenkins/workspace/short-fuzz-phy-autotest 00:00:06.614 [Pipeline] { 00:00:06.620 [Pipeline] catchError 00:00:06.621 [Pipeline] { 00:00:06.630 [Pipeline] wrap 00:00:06.635 [Pipeline] { 00:00:06.640 [Pipeline] stage 00:00:06.641 [Pipeline] { (Prologue) 00:00:06.843 [Pipeline] sh 00:00:07.124 + logger -p user.info -t JENKINS-CI 00:00:07.145 [Pipeline] echo 00:00:07.147 Node: WFP20 00:00:07.154 [Pipeline] sh 00:00:07.452 [Pipeline] setCustomBuildProperty 00:00:07.461 [Pipeline] echo 00:00:07.463 Cleanup processes 00:00:07.468 [Pipeline] sh 00:00:07.748 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:07.748 854065 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:07.761 [Pipeline] sh 00:00:08.049 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:08.049 ++ grep -v 'sudo pgrep' 00:00:08.049 ++ awk '{print $1}' 00:00:08.049 + sudo kill -9 00:00:08.049 + true 00:00:08.062 [Pipeline] cleanWs 00:00:08.071 [WS-CLEANUP] Deleting project workspace... 00:00:08.071 [WS-CLEANUP] Deferred wipeout is used... 
00:00:08.077 [WS-CLEANUP] done 00:00:08.081 [Pipeline] setCustomBuildProperty 00:00:08.094 [Pipeline] sh 00:00:08.376 + sudo git config --global --replace-all safe.directory '*' 00:00:08.512 [Pipeline] httpRequest 00:00:08.801 [Pipeline] echo 00:00:08.803 Sorcerer 10.211.164.20 is alive 00:00:08.813 [Pipeline] retry 00:00:08.815 [Pipeline] { 00:00:08.830 [Pipeline] httpRequest 00:00:08.834 HttpMethod: GET 00:00:08.834 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.835 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.856 Response Code: HTTP/1.1 200 OK 00:00:08.857 Success: Status code 200 is in the accepted range: 200,404 00:00:08.857 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:27.669 [Pipeline] } 00:00:27.686 [Pipeline] // retry 00:00:27.693 [Pipeline] sh 00:00:27.976 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:27.992 [Pipeline] httpRequest 00:00:28.406 [Pipeline] echo 00:00:28.408 Sorcerer 10.211.164.20 is alive 00:00:28.418 [Pipeline] retry 00:00:28.420 [Pipeline] { 00:00:28.433 [Pipeline] httpRequest 00:00:28.438 HttpMethod: GET 00:00:28.438 URL: http://10.211.164.20/packages/spdk_a393e5e6e04dd3af2fc437407309fc764ad2659e.tar.gz 00:00:28.439 Sending request to url: http://10.211.164.20/packages/spdk_a393e5e6e04dd3af2fc437407309fc764ad2659e.tar.gz 00:00:28.449 Response Code: HTTP/1.1 200 OK 00:00:28.450 Success: Status code 200 is in the accepted range: 200,404 00:00:28.450 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk_a393e5e6e04dd3af2fc437407309fc764ad2659e.tar.gz 00:01:20.317 [Pipeline] } 00:01:20.335 [Pipeline] // retry 00:01:20.342 [Pipeline] sh 00:01:20.631 + tar --no-same-owner -xf spdk_a393e5e6e04dd3af2fc437407309fc764ad2659e.tar.gz 00:01:23.171 [Pipeline] sh 00:01:23.452 + git -C spdk log --oneline -n5 00:01:23.452 a393e5e6e [TEST] 00:01:23.452 e01cb43b8 mk/spdk.common.mk sed the minor version 00:01:23.452 d58eef2a2 nvme/rdma: Fix reinserting qpair in connecting list after stale state 00:01:23.452 2104eacf0 test/check_so_deps: use VERSION to look for prior tags 00:01:23.452 66289a6db build: use VERSION file for storing version 00:01:23.463 [Pipeline] } 00:01:23.476 [Pipeline] // stage 00:01:23.485 [Pipeline] stage 00:01:23.487 [Pipeline] { (Prepare) 00:01:23.502 [Pipeline] writeFile 00:01:23.517 [Pipeline] sh 00:01:23.799 + logger -p user.info -t JENKINS-CI 00:01:23.810 [Pipeline] sh 00:01:24.095 + logger -p user.info -t JENKINS-CI 00:01:24.106 [Pipeline] sh 00:01:24.389 + cat autorun-spdk.conf 00:01:24.389 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:24.389 SPDK_TEST_FUZZER_SHORT=1 00:01:24.389 SPDK_TEST_FUZZER=1 00:01:24.389 SPDK_TEST_SETUP=1 00:01:24.389 SPDK_RUN_UBSAN=1 00:01:24.395 RUN_NIGHTLY=0 00:01:24.400 [Pipeline] readFile 00:01:24.421 [Pipeline] withEnv 00:01:24.423 [Pipeline] { 00:01:24.432 [Pipeline] sh 00:01:24.713 + set -ex 00:01:24.713 + [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf ]] 00:01:24.713 + source /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:01:24.713 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:24.713 ++ SPDK_TEST_FUZZER_SHORT=1 00:01:24.713 ++ SPDK_TEST_FUZZER=1 00:01:24.713 ++ SPDK_TEST_SETUP=1 00:01:24.713 ++ SPDK_RUN_UBSAN=1 00:01:24.713 ++ RUN_NIGHTLY=0 00:01:24.713 + case $SPDK_TEST_NVMF_NICS in 00:01:24.713 + DRIVERS= 00:01:24.713 + [[ -n '' ]] 
00:01:24.713 + exit 0 00:01:24.722 [Pipeline] } 00:01:24.736 [Pipeline] // withEnv 00:01:24.740 [Pipeline] } 00:01:24.753 [Pipeline] // stage 00:01:24.762 [Pipeline] catchError 00:01:24.764 [Pipeline] { 00:01:24.777 [Pipeline] timeout 00:01:24.777 Timeout set to expire in 30 min 00:01:24.779 [Pipeline] { 00:01:24.791 [Pipeline] stage 00:01:24.793 [Pipeline] { (Tests) 00:01:24.805 [Pipeline] sh 00:01:25.088 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/short-fuzz-phy-autotest 00:01:25.088 ++ readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest 00:01:25.088 + DIR_ROOT=/var/jenkins/workspace/short-fuzz-phy-autotest 00:01:25.088 + [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest ]] 00:01:25.088 + DIR_SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:01:25.088 + DIR_OUTPUT=/var/jenkins/workspace/short-fuzz-phy-autotest/output 00:01:25.088 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk ]] 00:01:25.088 + [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]] 00:01:25.088 + mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/output 00:01:25.088 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]] 00:01:25.088 + [[ short-fuzz-phy-autotest == pkgdep-* ]] 00:01:25.088 + cd /var/jenkins/workspace/short-fuzz-phy-autotest 00:01:25.088 + source /etc/os-release 00:01:25.088 ++ NAME='Fedora Linux' 00:01:25.088 ++ VERSION='39 (Cloud Edition)' 00:01:25.088 ++ ID=fedora 00:01:25.088 ++ VERSION_ID=39 00:01:25.088 ++ VERSION_CODENAME= 00:01:25.088 ++ PLATFORM_ID=platform:f39 00:01:25.088 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:25.088 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:25.088 ++ LOGO=fedora-logo-icon 00:01:25.088 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:25.088 ++ HOME_URL=https://fedoraproject.org/ 00:01:25.088 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:25.088 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:25.088 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:25.088 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:25.088 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:25.088 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:25.088 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:25.088 ++ SUPPORT_END=2024-11-12 00:01:25.088 ++ VARIANT='Cloud Edition' 00:01:25.088 ++ VARIANT_ID=cloud 00:01:25.088 + uname -a 00:01:25.088 Linux spdk-wfp-20 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:25.088 + sudo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:01:28.374 Hugepages 00:01:28.374 node hugesize free / total 00:01:28.374 node0 1048576kB 0 / 0 00:01:28.374 node0 2048kB 0 / 0 00:01:28.374 node1 1048576kB 0 / 0 00:01:28.374 node1 2048kB 0 / 0 00:01:28.374 00:01:28.374 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:28.374 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:01:28.374 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:01:28.374 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:01:28.374 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:01:28.374 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:01:28.374 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:01:28.374 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:01:28.374 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:01:28.374 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:01:28.374 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:01:28.374 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:01:28.374 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 
00:01:28.374 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:01:28.374 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:01:28.374 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:01:28.374 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:01:28.374 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:01:28.374 + rm -f /tmp/spdk-ld-path 00:01:28.374 + source autorun-spdk.conf 00:01:28.374 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:28.374 ++ SPDK_TEST_FUZZER_SHORT=1 00:01:28.374 ++ SPDK_TEST_FUZZER=1 00:01:28.374 ++ SPDK_TEST_SETUP=1 00:01:28.374 ++ SPDK_RUN_UBSAN=1 00:01:28.374 ++ RUN_NIGHTLY=0 00:01:28.374 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:28.374 + [[ -n '' ]] 00:01:28.374 + sudo git config --global --add safe.directory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:01:28.374 + for M in /var/spdk/build-*-manifest.txt 00:01:28.374 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:28.374 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/ 00:01:28.374 + for M in /var/spdk/build-*-manifest.txt 00:01:28.374 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:28.374 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/ 00:01:28.374 + for M in /var/spdk/build-*-manifest.txt 00:01:28.374 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:28.374 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/ 00:01:28.374 ++ uname 00:01:28.374 + [[ Linux == \L\i\n\u\x ]] 00:01:28.374 + sudo dmesg -T 00:01:28.374 + sudo dmesg --clear 00:01:28.374 + dmesg_pid=855537 00:01:28.374 + [[ Fedora Linux == FreeBSD ]] 00:01:28.374 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:28.374 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:28.374 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:28.374 + [[ -x /usr/src/fio-static/fio ]] 00:01:28.374 + export FIO_BIN=/usr/src/fio-static/fio 00:01:28.374 + FIO_BIN=/usr/src/fio-static/fio 00:01:28.374 + sudo dmesg -Tw 00:01:28.374 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\s\h\o\r\t\-\f\u\z\z\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:28.374 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:28.374 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:28.374 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:28.374 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:28.374 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:28.374 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:28.374 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:28.374 + spdk/autorun.sh /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:01:28.374 12:21:33 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:01:28.374 12:21:33 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:01:28.374 12:21:33 -- short-fuzz-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:28.374 12:21:33 -- short-fuzz-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_FUZZER_SHORT=1 00:01:28.374 12:21:33 -- short-fuzz-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_FUZZER=1 00:01:28.374 12:21:33 -- short-fuzz-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_SETUP=1 00:01:28.374 12:21:33 -- short-fuzz-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_RUN_UBSAN=1 00:01:28.374 12:21:33 -- short-fuzz-phy-autotest/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=0 00:01:28.374 12:21:33 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:28.374 12:21:33 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:01:28.374 12:21:33 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:01:28.374 12:21:33 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:01:28.374 12:21:33 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:28.374 12:21:33 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:28.374 12:21:33 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:28.374 12:21:33 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:28.374 12:21:33 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:28.374 12:21:33 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:28.374 12:21:33 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:28.374 12:21:33 -- paths/export.sh@5 -- $ export PATH 00:01:28.374 12:21:33 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:28.374 12:21:33 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:01:28.375 12:21:33 -- common/autobuild_common.sh@493 -- $ date +%s 00:01:28.375 12:21:33 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1734348093.XXXXXX 00:01:28.375 12:21:33 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1734348093.SpbMPQ 00:01:28.375 12:21:33 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:01:28.375 12:21:33 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:01:28.375 12:21:33 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/' 00:01:28.375 12:21:33 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:28.375 12:21:33 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:28.375 12:21:33 -- common/autobuild_common.sh@509 -- $ get_config_params 00:01:28.375 12:21:33 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:01:28.375 12:21:33 -- common/autotest_common.sh@10 -- $ set +x 00:01:28.375 12:21:33 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:01:28.375 12:21:33 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:01:28.375 12:21:33 -- pm/common@17 -- $ local monitor 00:01:28.375 12:21:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:28.375 12:21:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:28.375 12:21:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:28.375 12:21:33 -- pm/common@21 -- $ date +%s 00:01:28.375 12:21:33 -- pm/common@21 -- $ date +%s 00:01:28.375 12:21:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:28.375 12:21:33 -- pm/common@25 -- $ sleep 1 00:01:28.375 12:21:33 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1734348093 
00:01:28.375 12:21:33 -- pm/common@21 -- $ date +%s 00:01:28.375 12:21:33 -- pm/common@21 -- $ date +%s 00:01:28.375 12:21:33 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1734348093 00:01:28.375 12:21:33 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1734348093 00:01:28.375 12:21:33 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1734348093 00:01:28.635 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1734348093_collect-vmstat.pm.log 00:01:28.635 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1734348093_collect-cpu-load.pm.log 00:01:28.635 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1734348093_collect-cpu-temp.pm.log 00:01:28.635 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1734348093_collect-bmc-pm.bmc.pm.log 00:01:29.643 12:21:34 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:01:29.643 12:21:34 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:29.643 12:21:34 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:29.643 12:21:34 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:01:29.643 12:21:34 -- spdk/autobuild.sh@16 -- $ date -u 00:01:29.643 Mon Dec 16 11:21:34 AM UTC 2024 00:01:29.643 12:21:34 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:29.643 v25.01-rc1-3-ga393e5e6e 00:01:29.643 12:21:34 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:29.643 12:21:34 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:29.643 12:21:34 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:29.643 12:21:34 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:29.643 12:21:34 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:29.643 12:21:34 -- common/autotest_common.sh@10 -- $ set +x 00:01:29.643 ************************************ 00:01:29.643 START TEST ubsan 00:01:29.643 ************************************ 00:01:29.643 12:21:34 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:01:29.643 using ubsan 00:01:29.643 00:01:29.643 real 0m0.001s 00:01:29.643 user 0m0.000s 00:01:29.643 sys 0m0.000s 00:01:29.643 12:21:34 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:29.643 12:21:34 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:29.643 ************************************ 00:01:29.643 END TEST ubsan 00:01:29.643 ************************************ 00:01:29.643 12:21:35 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:29.643 12:21:35 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:29.643 12:21:35 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:29.643 12:21:35 -- spdk/autobuild.sh@51 -- $ [[ 1 -eq 1 ]] 00:01:29.643 12:21:35 -- spdk/autobuild.sh@52 -- $ llvm_precompile 00:01:29.643 12:21:35 -- common/autobuild_common.sh@445 -- $ run_test autobuild_llvm_precompile _llvm_precompile 00:01:29.643 12:21:35 -- common/autotest_common.sh@1105 -- $ '[' 2 
-le 1 ']' 00:01:29.643 12:21:35 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:29.643 12:21:35 -- common/autotest_common.sh@10 -- $ set +x 00:01:29.643 ************************************ 00:01:29.643 START TEST autobuild_llvm_precompile 00:01:29.643 ************************************ 00:01:29.643 12:21:35 autobuild_llvm_precompile -- common/autotest_common.sh@1129 -- $ _llvm_precompile 00:01:29.643 12:21:35 autobuild_llvm_precompile -- common/autobuild_common.sh@32 -- $ clang --version 00:01:29.643 12:21:35 autobuild_llvm_precompile -- common/autobuild_common.sh@32 -- $ [[ clang version 17.0.6 (Fedora 17.0.6-2.fc39) 00:01:29.643 Target: x86_64-redhat-linux-gnu 00:01:29.643 Thread model: posix 00:01:29.643 InstalledDir: /usr/bin =~ version (([0-9]+).([0-9]+).([0-9]+)) ]] 00:01:29.643 12:21:35 autobuild_llvm_precompile -- common/autobuild_common.sh@33 -- $ clang_num=17 00:01:29.643 12:21:35 autobuild_llvm_precompile -- common/autobuild_common.sh@35 -- $ export CC=clang-17 00:01:29.643 12:21:35 autobuild_llvm_precompile -- common/autobuild_common.sh@35 -- $ CC=clang-17 00:01:29.643 12:21:35 autobuild_llvm_precompile -- common/autobuild_common.sh@36 -- $ export CXX=clang++-17 00:01:29.643 12:21:35 autobuild_llvm_precompile -- common/autobuild_common.sh@36 -- $ CXX=clang++-17 00:01:29.644 12:21:35 autobuild_llvm_precompile -- common/autobuild_common.sh@38 -- $ fuzzer_libs=(/usr/lib*/clang/@("$clang_num"|"$clang_version")/lib/*linux*/libclang_rt.fuzzer_no_main?(-x86_64).a) 00:01:29.644 12:21:35 autobuild_llvm_precompile -- common/autobuild_common.sh@39 -- $ fuzzer_lib=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:01:29.644 12:21:35 autobuild_llvm_precompile -- common/autobuild_common.sh@40 -- $ [[ -e /usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a ]] 00:01:29.644 12:21:35 autobuild_llvm_precompile -- common/autobuild_common.sh@42 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a' 00:01:29.644 12:21:35 autobuild_llvm_precompile -- common/autobuild_common.sh@44 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:01:29.904 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:01:29.904 Using default DPDK in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:01:30.163 Using 'verbs' RDMA provider 00:01:46.434 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:58.645 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:58.645 Creating mk/config.mk...done. 00:01:58.645 Creating mk/cc.flags.mk...done. 00:01:58.645 Type 'make' to build. 
00:01:58.645 00:01:58.645 real 0m29.028s 00:01:58.645 user 0m12.878s 00:01:58.645 sys 0m15.521s 00:01:58.645 12:22:04 autobuild_llvm_precompile -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:58.645 12:22:04 autobuild_llvm_precompile -- common/autotest_common.sh@10 -- $ set +x 00:01:58.645 ************************************ 00:01:58.645 END TEST autobuild_llvm_precompile 00:01:58.645 ************************************ 00:01:58.645 12:22:04 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:58.645 12:22:04 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:58.646 12:22:04 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:58.646 12:22:04 -- spdk/autobuild.sh@62 -- $ [[ 1 -eq 1 ]] 00:01:58.646 12:22:04 -- spdk/autobuild.sh@64 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:01:58.905 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:01:58.905 Using default DPDK in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:01:59.472 Using 'verbs' RDMA provider 00:02:12.609 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal.log)...done. 00:02:22.610 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:23.438 Creating mk/config.mk...done. 00:02:23.438 Creating mk/cc.flags.mk...done. 00:02:23.438 Type 'make' to build. 00:02:23.438 12:22:28 -- spdk/autobuild.sh@70 -- $ run_test make make -j112 00:02:23.438 12:22:28 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:23.438 12:22:28 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:23.438 12:22:28 -- common/autotest_common.sh@10 -- $ set +x 00:02:23.438 ************************************ 00:02:23.438 START TEST make 00:02:23.438 ************************************ 00:02:23.438 12:22:28 make -- common/autotest_common.sh@1129 -- $ make -j112 00:02:25.356 The Meson build system 00:02:25.356 Version: 1.5.0 00:02:25.356 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user 00:02:25.356 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:25.356 Build type: native build 00:02:25.356 Project name: libvfio-user 00:02:25.356 Project version: 0.0.1 00:02:25.356 C compiler for the host machine: clang-17 (clang 17.0.6 "clang version 17.0.6 (Fedora 17.0.6-2.fc39)") 00:02:25.356 C linker for the host machine: clang-17 ld.bfd 2.40-14 00:02:25.356 Host machine cpu family: x86_64 00:02:25.356 Host machine cpu: x86_64 00:02:25.356 Run-time dependency threads found: YES 00:02:25.356 Library dl found: YES 00:02:25.356 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:25.356 Run-time dependency json-c found: YES 0.17 00:02:25.356 Run-time dependency cmocka found: YES 1.1.7 00:02:25.356 Program pytest-3 found: NO 00:02:25.356 Program flake8 found: NO 00:02:25.356 Program misspell-fixer found: NO 00:02:25.356 Program restructuredtext-lint found: NO 00:02:25.356 Program valgrind found: YES (/usr/bin/valgrind) 00:02:25.356 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:25.356 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:25.356 Compiler for C supports arguments 
-Wwrite-strings: YES 00:02:25.356 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:25.356 Program test-lspci.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:02:25.356 Program test-linkage.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:02:25.356 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:25.356 Build targets in project: 8 00:02:25.356 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:25.356 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:25.356 00:02:25.356 libvfio-user 0.0.1 00:02:25.356 00:02:25.356 User defined options 00:02:25.356 buildtype : debug 00:02:25.356 default_library: static 00:02:25.356 libdir : /usr/local/lib 00:02:25.356 00:02:25.356 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:25.616 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:25.875 [1/36] Compiling C object samples/lspci.p/lspci.c.o 00:02:25.875 [2/36] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:25.875 [3/36] Compiling C object lib/libvfio-user.a.p/irq.c.o 00:02:25.875 [4/36] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:25.875 [5/36] Compiling C object samples/null.p/null.c.o 00:02:25.875 [6/36] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:25.875 [7/36] Compiling C object lib/libvfio-user.a.p/tran.c.o 00:02:25.875 [8/36] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:25.875 [9/36] Compiling C object lib/libvfio-user.a.p/migration.c.o 00:02:25.875 [10/36] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:25.875 [11/36] Compiling C object lib/libvfio-user.a.p/pci.c.o 00:02:25.875 [12/36] Compiling C object lib/libvfio-user.a.p/pci_caps.c.o 00:02:25.875 [13/36] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:25.875 [14/36] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:25.875 [15/36] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:25.875 [16/36] Compiling C object lib/libvfio-user.a.p/dma.c.o 00:02:25.875 [17/36] Compiling C object lib/libvfio-user.a.p/tran_sock.c.o 00:02:25.875 [18/36] Compiling C object test/unit_tests.p/mocks.c.o 00:02:25.875 [19/36] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:25.875 [20/36] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:25.875 [21/36] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:25.875 [22/36] Compiling C object samples/server.p/server.c.o 00:02:25.875 [23/36] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:25.875 [24/36] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:25.875 [25/36] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:25.875 [26/36] Compiling C object samples/client.p/client.c.o 00:02:25.875 [27/36] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:25.875 [28/36] Compiling C object lib/libvfio-user.a.p/libvfio-user.c.o 00:02:25.875 [29/36] Linking static target lib/libvfio-user.a 00:02:25.875 [30/36] Linking target samples/client 00:02:25.875 [31/36] Linking target samples/gpio-pci-idio-16 00:02:25.875 [32/36] 
Linking target test/unit_tests 00:02:25.875 [33/36] Linking target samples/shadow_ioeventfd_server 00:02:25.875 [34/36] Linking target samples/null 00:02:25.875 [35/36] Linking target samples/lspci 00:02:25.875 [36/36] Linking target samples/server 00:02:25.875 INFO: autodetecting backend as ninja 00:02:25.875 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:26.134 DESTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:26.394 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:26.394 ninja: no work to do. 00:02:31.675 The Meson build system 00:02:31.675 Version: 1.5.0 00:02:31.675 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk 00:02:31.675 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp 00:02:31.675 Build type: native build 00:02:31.675 Program cat found: YES (/usr/bin/cat) 00:02:31.675 Project name: DPDK 00:02:31.675 Project version: 24.03.0 00:02:31.675 C compiler for the host machine: clang-17 (clang 17.0.6 "clang version 17.0.6 (Fedora 17.0.6-2.fc39)") 00:02:31.676 C linker for the host machine: clang-17 ld.bfd 2.40-14 00:02:31.676 Host machine cpu family: x86_64 00:02:31.676 Host machine cpu: x86_64 00:02:31.676 Message: ## Building in Developer Mode ## 00:02:31.676 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:31.676 Program check-symbols.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:02:31.676 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:31.676 Program python3 found: YES (/usr/bin/python3) 00:02:31.676 Program cat found: YES (/usr/bin/cat) 00:02:31.676 Compiler for C supports arguments -march=native: YES 00:02:31.676 Checking for size of "void *" : 8 00:02:31.676 Checking for size of "void *" : 8 (cached) 00:02:31.676 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:31.676 Library m found: YES 00:02:31.676 Library numa found: YES 00:02:31.676 Has header "numaif.h" : YES 00:02:31.676 Library fdt found: NO 00:02:31.676 Library execinfo found: NO 00:02:31.676 Has header "execinfo.h" : YES 00:02:31.676 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:31.676 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:31.676 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:31.676 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:31.676 Run-time dependency openssl found: YES 3.1.1 00:02:31.676 Run-time dependency libpcap found: YES 1.10.4 00:02:31.676 Has header "pcap.h" with dependency libpcap: YES 00:02:31.676 Compiler for C supports arguments -Wcast-qual: YES 00:02:31.676 Compiler for C supports arguments -Wdeprecated: YES 00:02:31.676 Compiler for C supports arguments -Wformat: YES 00:02:31.676 Compiler for C supports arguments -Wformat-nonliteral: YES 00:02:31.676 Compiler for C supports arguments -Wformat-security: YES 00:02:31.676 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:31.676 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:31.676 Compiler for C supports arguments -Wnested-externs: YES 00:02:31.676 Compiler for C supports arguments -Wold-style-definition: 
YES 00:02:31.676 Compiler for C supports arguments -Wpointer-arith: YES 00:02:31.676 Compiler for C supports arguments -Wsign-compare: YES 00:02:31.676 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:31.676 Compiler for C supports arguments -Wundef: YES 00:02:31.676 Compiler for C supports arguments -Wwrite-strings: YES 00:02:31.676 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:31.676 Compiler for C supports arguments -Wno-packed-not-aligned: NO 00:02:31.676 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:31.676 Program objdump found: YES (/usr/bin/objdump) 00:02:31.676 Compiler for C supports arguments -mavx512f: YES 00:02:31.676 Checking if "AVX512 checking" compiles: YES 00:02:31.676 Fetching value of define "__SSE4_2__" : 1 00:02:31.676 Fetching value of define "__AES__" : 1 00:02:31.676 Fetching value of define "__AVX__" : 1 00:02:31.676 Fetching value of define "__AVX2__" : 1 00:02:31.676 Fetching value of define "__AVX512BW__" : 1 00:02:31.676 Fetching value of define "__AVX512CD__" : 1 00:02:31.676 Fetching value of define "__AVX512DQ__" : 1 00:02:31.676 Fetching value of define "__AVX512F__" : 1 00:02:31.676 Fetching value of define "__AVX512VL__" : 1 00:02:31.676 Fetching value of define "__PCLMUL__" : 1 00:02:31.676 Fetching value of define "__RDRND__" : 1 00:02:31.676 Fetching value of define "__RDSEED__" : 1 00:02:31.676 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:31.676 Fetching value of define "__znver1__" : (undefined) 00:02:31.676 Fetching value of define "__znver2__" : (undefined) 00:02:31.676 Fetching value of define "__znver3__" : (undefined) 00:02:31.676 Fetching value of define "__znver4__" : (undefined) 00:02:31.676 Compiler for C supports arguments -Wno-format-truncation: NO 00:02:31.676 Message: lib/log: Defining dependency "log" 00:02:31.676 Message: lib/kvargs: Defining dependency "kvargs" 00:02:31.676 Message: lib/telemetry: Defining dependency "telemetry" 00:02:31.676 Checking for function "getentropy" : NO 00:02:31.676 Message: lib/eal: Defining dependency "eal" 00:02:31.676 Message: lib/ring: Defining dependency "ring" 00:02:31.676 Message: lib/rcu: Defining dependency "rcu" 00:02:31.676 Message: lib/mempool: Defining dependency "mempool" 00:02:31.676 Message: lib/mbuf: Defining dependency "mbuf" 00:02:31.676 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:31.676 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:31.676 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:31.676 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:31.676 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:31.676 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:31.676 Compiler for C supports arguments -mpclmul: YES 00:02:31.676 Compiler for C supports arguments -maes: YES 00:02:31.676 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:31.676 Compiler for C supports arguments -mavx512bw: YES 00:02:31.676 Compiler for C supports arguments -mavx512dq: YES 00:02:31.676 Compiler for C supports arguments -mavx512vl: YES 00:02:31.676 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:31.676 Compiler for C supports arguments -mavx2: YES 00:02:31.676 Compiler for C supports arguments -mavx: YES 00:02:31.676 Message: lib/net: Defining dependency "net" 00:02:31.676 Message: lib/meter: Defining dependency "meter" 00:02:31.676 Message: lib/ethdev: Defining dependency "ethdev" 00:02:31.676 Message: lib/pci: Defining 
dependency "pci" 00:02:31.676 Message: lib/cmdline: Defining dependency "cmdline" 00:02:31.676 Message: lib/hash: Defining dependency "hash" 00:02:31.676 Message: lib/timer: Defining dependency "timer" 00:02:31.676 Message: lib/compressdev: Defining dependency "compressdev" 00:02:31.676 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:31.676 Message: lib/dmadev: Defining dependency "dmadev" 00:02:31.676 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:31.676 Message: lib/power: Defining dependency "power" 00:02:31.676 Message: lib/reorder: Defining dependency "reorder" 00:02:31.676 Message: lib/security: Defining dependency "security" 00:02:31.676 Has header "linux/userfaultfd.h" : YES 00:02:31.676 Has header "linux/vduse.h" : YES 00:02:31.676 Message: lib/vhost: Defining dependency "vhost" 00:02:31.676 Compiler for C supports arguments -Wno-format-truncation: NO (cached) 00:02:31.676 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:31.676 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:31.676 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:31.676 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:31.676 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:31.676 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:31.676 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:31.676 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:31.676 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:31.676 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:31.676 Configuring doxy-api-html.conf using configuration 00:02:31.676 Configuring doxy-api-man.conf using configuration 00:02:31.676 Program mandb found: YES (/usr/bin/mandb) 00:02:31.676 Program sphinx-build found: NO 00:02:31.676 Configuring rte_build_config.h using configuration 00:02:31.676 Message: 00:02:31.676 ================= 00:02:31.676 Applications Enabled 00:02:31.676 ================= 00:02:31.676 00:02:31.676 apps: 00:02:31.676 00:02:31.676 00:02:31.676 Message: 00:02:31.676 ================= 00:02:31.676 Libraries Enabled 00:02:31.676 ================= 00:02:31.676 00:02:31.676 libs: 00:02:31.676 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:31.676 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:31.676 cryptodev, dmadev, power, reorder, security, vhost, 00:02:31.676 00:02:31.676 Message: 00:02:31.676 =============== 00:02:31.676 Drivers Enabled 00:02:31.676 =============== 00:02:31.676 00:02:31.676 common: 00:02:31.676 00:02:31.676 bus: 00:02:31.676 pci, vdev, 00:02:31.676 mempool: 00:02:31.676 ring, 00:02:31.676 dma: 00:02:31.676 00:02:31.676 net: 00:02:31.676 00:02:31.676 crypto: 00:02:31.676 00:02:31.676 compress: 00:02:31.676 00:02:31.676 vdpa: 00:02:31.676 00:02:31.676 00:02:31.676 Message: 00:02:31.676 ================= 00:02:31.676 Content Skipped 00:02:31.676 ================= 00:02:31.676 00:02:31.676 apps: 00:02:31.676 dumpcap: explicitly disabled via build config 00:02:31.676 graph: explicitly disabled via build config 00:02:31.676 pdump: explicitly disabled via build config 00:02:31.676 proc-info: explicitly disabled via build config 00:02:31.676 test-acl: explicitly disabled via build config 00:02:31.676 test-bbdev: explicitly disabled via build config 00:02:31.676 test-cmdline: explicitly disabled via build config 00:02:31.676 test-compress-perf: 
explicitly disabled via build config 00:02:31.676 test-crypto-perf: explicitly disabled via build config 00:02:31.676 test-dma-perf: explicitly disabled via build config 00:02:31.676 test-eventdev: explicitly disabled via build config 00:02:31.676 test-fib: explicitly disabled via build config 00:02:31.676 test-flow-perf: explicitly disabled via build config 00:02:31.676 test-gpudev: explicitly disabled via build config 00:02:31.676 test-mldev: explicitly disabled via build config 00:02:31.676 test-pipeline: explicitly disabled via build config 00:02:31.676 test-pmd: explicitly disabled via build config 00:02:31.676 test-regex: explicitly disabled via build config 00:02:31.676 test-sad: explicitly disabled via build config 00:02:31.676 test-security-perf: explicitly disabled via build config 00:02:31.676 00:02:31.676 libs: 00:02:31.676 argparse: explicitly disabled via build config 00:02:31.676 metrics: explicitly disabled via build config 00:02:31.676 acl: explicitly disabled via build config 00:02:31.676 bbdev: explicitly disabled via build config 00:02:31.676 bitratestats: explicitly disabled via build config 00:02:31.676 bpf: explicitly disabled via build config 00:02:31.676 cfgfile: explicitly disabled via build config 00:02:31.676 distributor: explicitly disabled via build config 00:02:31.676 efd: explicitly disabled via build config 00:02:31.676 eventdev: explicitly disabled via build config 00:02:31.676 dispatcher: explicitly disabled via build config 00:02:31.676 gpudev: explicitly disabled via build config 00:02:31.676 gro: explicitly disabled via build config 00:02:31.677 gso: explicitly disabled via build config 00:02:31.677 ip_frag: explicitly disabled via build config 00:02:31.677 jobstats: explicitly disabled via build config 00:02:31.677 latencystats: explicitly disabled via build config 00:02:31.677 lpm: explicitly disabled via build config 00:02:31.677 member: explicitly disabled via build config 00:02:31.677 pcapng: explicitly disabled via build config 00:02:31.677 rawdev: explicitly disabled via build config 00:02:31.677 regexdev: explicitly disabled via build config 00:02:31.677 mldev: explicitly disabled via build config 00:02:31.677 rib: explicitly disabled via build config 00:02:31.677 sched: explicitly disabled via build config 00:02:31.677 stack: explicitly disabled via build config 00:02:31.677 ipsec: explicitly disabled via build config 00:02:31.677 pdcp: explicitly disabled via build config 00:02:31.677 fib: explicitly disabled via build config 00:02:31.677 port: explicitly disabled via build config 00:02:31.677 pdump: explicitly disabled via build config 00:02:31.677 table: explicitly disabled via build config 00:02:31.677 pipeline: explicitly disabled via build config 00:02:31.677 graph: explicitly disabled via build config 00:02:31.677 node: explicitly disabled via build config 00:02:31.677 00:02:31.677 drivers: 00:02:31.677 common/cpt: not in enabled drivers build config 00:02:31.677 common/dpaax: not in enabled drivers build config 00:02:31.677 common/iavf: not in enabled drivers build config 00:02:31.677 common/idpf: not in enabled drivers build config 00:02:31.677 common/ionic: not in enabled drivers build config 00:02:31.677 common/mvep: not in enabled drivers build config 00:02:31.677 common/octeontx: not in enabled drivers build config 00:02:31.677 bus/auxiliary: not in enabled drivers build config 00:02:31.677 bus/cdx: not in enabled drivers build config 00:02:31.677 bus/dpaa: not in enabled drivers build config 00:02:31.677 bus/fslmc: not in enabled 
drivers build config 00:02:31.677 bus/ifpga: not in enabled drivers build config 00:02:31.677 bus/platform: not in enabled drivers build config 00:02:31.677 bus/uacce: not in enabled drivers build config 00:02:31.677 bus/vmbus: not in enabled drivers build config 00:02:31.677 common/cnxk: not in enabled drivers build config 00:02:31.677 common/mlx5: not in enabled drivers build config 00:02:31.677 common/nfp: not in enabled drivers build config 00:02:31.677 common/nitrox: not in enabled drivers build config 00:02:31.677 common/qat: not in enabled drivers build config 00:02:31.677 common/sfc_efx: not in enabled drivers build config 00:02:31.677 mempool/bucket: not in enabled drivers build config 00:02:31.677 mempool/cnxk: not in enabled drivers build config 00:02:31.677 mempool/dpaa: not in enabled drivers build config 00:02:31.677 mempool/dpaa2: not in enabled drivers build config 00:02:31.677 mempool/octeontx: not in enabled drivers build config 00:02:31.677 mempool/stack: not in enabled drivers build config 00:02:31.677 dma/cnxk: not in enabled drivers build config 00:02:31.677 dma/dpaa: not in enabled drivers build config 00:02:31.677 dma/dpaa2: not in enabled drivers build config 00:02:31.677 dma/hisilicon: not in enabled drivers build config 00:02:31.677 dma/idxd: not in enabled drivers build config 00:02:31.677 dma/ioat: not in enabled drivers build config 00:02:31.677 dma/skeleton: not in enabled drivers build config 00:02:31.677 net/af_packet: not in enabled drivers build config 00:02:31.677 net/af_xdp: not in enabled drivers build config 00:02:31.677 net/ark: not in enabled drivers build config 00:02:31.677 net/atlantic: not in enabled drivers build config 00:02:31.677 net/avp: not in enabled drivers build config 00:02:31.677 net/axgbe: not in enabled drivers build config 00:02:31.677 net/bnx2x: not in enabled drivers build config 00:02:31.677 net/bnxt: not in enabled drivers build config 00:02:31.677 net/bonding: not in enabled drivers build config 00:02:31.677 net/cnxk: not in enabled drivers build config 00:02:31.677 net/cpfl: not in enabled drivers build config 00:02:31.677 net/cxgbe: not in enabled drivers build config 00:02:31.677 net/dpaa: not in enabled drivers build config 00:02:31.677 net/dpaa2: not in enabled drivers build config 00:02:31.677 net/e1000: not in enabled drivers build config 00:02:31.677 net/ena: not in enabled drivers build config 00:02:31.677 net/enetc: not in enabled drivers build config 00:02:31.677 net/enetfec: not in enabled drivers build config 00:02:31.677 net/enic: not in enabled drivers build config 00:02:31.677 net/failsafe: not in enabled drivers build config 00:02:31.677 net/fm10k: not in enabled drivers build config 00:02:31.677 net/gve: not in enabled drivers build config 00:02:31.677 net/hinic: not in enabled drivers build config 00:02:31.677 net/hns3: not in enabled drivers build config 00:02:31.677 net/i40e: not in enabled drivers build config 00:02:31.677 net/iavf: not in enabled drivers build config 00:02:31.677 net/ice: not in enabled drivers build config 00:02:31.677 net/idpf: not in enabled drivers build config 00:02:31.677 net/igc: not in enabled drivers build config 00:02:31.677 net/ionic: not in enabled drivers build config 00:02:31.677 net/ipn3ke: not in enabled drivers build config 00:02:31.677 net/ixgbe: not in enabled drivers build config 00:02:31.677 net/mana: not in enabled drivers build config 00:02:31.677 net/memif: not in enabled drivers build config 00:02:31.677 net/mlx4: not in enabled drivers build config 00:02:31.677 
net/mlx5: not in enabled drivers build config 00:02:31.677 net/mvneta: not in enabled drivers build config 00:02:31.677 net/mvpp2: not in enabled drivers build config 00:02:31.677 net/netvsc: not in enabled drivers build config 00:02:31.677 net/nfb: not in enabled drivers build config 00:02:31.677 net/nfp: not in enabled drivers build config 00:02:31.677 net/ngbe: not in enabled drivers build config 00:02:31.677 net/null: not in enabled drivers build config 00:02:31.677 net/octeontx: not in enabled drivers build config 00:02:31.677 net/octeon_ep: not in enabled drivers build config 00:02:31.677 net/pcap: not in enabled drivers build config 00:02:31.677 net/pfe: not in enabled drivers build config 00:02:31.677 net/qede: not in enabled drivers build config 00:02:31.677 net/ring: not in enabled drivers build config 00:02:31.677 net/sfc: not in enabled drivers build config 00:02:31.677 net/softnic: not in enabled drivers build config 00:02:31.677 net/tap: not in enabled drivers build config 00:02:31.677 net/thunderx: not in enabled drivers build config 00:02:31.677 net/txgbe: not in enabled drivers build config 00:02:31.677 net/vdev_netvsc: not in enabled drivers build config 00:02:31.677 net/vhost: not in enabled drivers build config 00:02:31.677 net/virtio: not in enabled drivers build config 00:02:31.677 net/vmxnet3: not in enabled drivers build config 00:02:31.677 raw/*: missing internal dependency, "rawdev" 00:02:31.677 crypto/armv8: not in enabled drivers build config 00:02:31.677 crypto/bcmfs: not in enabled drivers build config 00:02:31.677 crypto/caam_jr: not in enabled drivers build config 00:02:31.677 crypto/ccp: not in enabled drivers build config 00:02:31.677 crypto/cnxk: not in enabled drivers build config 00:02:31.677 crypto/dpaa_sec: not in enabled drivers build config 00:02:31.677 crypto/dpaa2_sec: not in enabled drivers build config 00:02:31.677 crypto/ipsec_mb: not in enabled drivers build config 00:02:31.677 crypto/mlx5: not in enabled drivers build config 00:02:31.677 crypto/mvsam: not in enabled drivers build config 00:02:31.677 crypto/nitrox: not in enabled drivers build config 00:02:31.677 crypto/null: not in enabled drivers build config 00:02:31.677 crypto/octeontx: not in enabled drivers build config 00:02:31.677 crypto/openssl: not in enabled drivers build config 00:02:31.677 crypto/scheduler: not in enabled drivers build config 00:02:31.677 crypto/uadk: not in enabled drivers build config 00:02:31.677 crypto/virtio: not in enabled drivers build config 00:02:31.677 compress/isal: not in enabled drivers build config 00:02:31.677 compress/mlx5: not in enabled drivers build config 00:02:31.677 compress/nitrox: not in enabled drivers build config 00:02:31.677 compress/octeontx: not in enabled drivers build config 00:02:31.677 compress/zlib: not in enabled drivers build config 00:02:31.677 regex/*: missing internal dependency, "regexdev" 00:02:31.677 ml/*: missing internal dependency, "mldev" 00:02:31.677 vdpa/ifc: not in enabled drivers build config 00:02:31.677 vdpa/mlx5: not in enabled drivers build config 00:02:31.677 vdpa/nfp: not in enabled drivers build config 00:02:31.677 vdpa/sfc: not in enabled drivers build config 00:02:31.677 event/*: missing internal dependency, "eventdev" 00:02:31.677 baseband/*: missing internal dependency, "bbdev" 00:02:31.677 gpu/*: missing internal dependency, "gpudev" 00:02:31.677 00:02:31.677 00:02:31.937 Build targets in project: 85 00:02:31.937 00:02:31.937 DPDK 24.03.0 00:02:31.937 00:02:31.937 User defined options 00:02:31.937 
buildtype : debug 00:02:31.937 default_library : static 00:02:31.937 libdir : lib 00:02:31.937 prefix : /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:02:31.937 c_args : -fPIC -Werror 00:02:31.937 c_link_args : 00:02:31.937 cpu_instruction_set: native 00:02:31.937 disable_apps : test-dma-perf,test,test-sad,test-acl,test-pmd,test-mldev,test-compress-perf,test-cmdline,test-regex,test-fib,graph,test-bbdev,dumpcap,test-gpudev,proc-info,test-pipeline,test-flow-perf,test-crypto-perf,pdump,test-eventdev,test-security-perf 00:02:31.937 disable_libs : port,lpm,ipsec,regexdev,dispatcher,argparse,bitratestats,rawdev,stack,graph,acl,bbdev,pipeline,member,sched,pcapng,mldev,eventdev,efd,metrics,latencystats,cfgfile,ip_frag,jobstats,pdump,pdcp,rib,node,fib,distributor,gso,table,bpf,gpudev,gro 00:02:31.937 enable_docs : false 00:02:31.937 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:02:31.937 enable_kmods : false 00:02:31.937 max_lcores : 128 00:02:31.937 tests : false 00:02:31.937 00:02:31.937 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:32.512 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp' 00:02:32.512 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:32.512 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:32.512 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:32.512 [4/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:32.512 [5/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:32.512 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:32.512 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:32.512 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:32.512 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:32.512 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:32.512 [11/268] Linking static target lib/librte_kvargs.a 00:02:32.512 [12/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:32.512 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:32.512 [14/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:32.512 [15/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:32.512 [16/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:32.512 [17/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:32.512 [18/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:32.512 [19/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:32.513 [20/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:32.513 [21/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:32.513 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:32.513 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:32.513 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:32.513 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:32.513 [26/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:32.513 [27/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:32.513 [28/268] Linking static target lib/librte_log.a 00:02:32.513 [29/268] Linking static target lib/librte_pci.a 00:02:32.513 [30/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:32.513 [31/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:32.513 [32/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:32.772 [33/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:32.772 [34/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:32.772 [35/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:33.032 [36/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:33.032 [37/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:33.032 [38/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:33.032 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:33.032 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:33.032 [41/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:33.032 [42/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:33.032 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:33.032 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:33.032 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:33.032 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:33.032 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:33.032 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:33.032 [49/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:33.032 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:33.032 [51/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:33.032 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:33.032 [53/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:33.032 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:33.032 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:33.032 [56/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:33.032 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:33.032 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:33.032 [59/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:33.032 [60/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:33.032 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:33.032 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:33.032 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:33.032 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:33.032 [65/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:33.032 [66/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:33.032 [67/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:33.032 [68/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:33.032 [69/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:33.032 [70/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:33.032 [71/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:33.032 [72/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:33.032 [73/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:33.032 [74/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:33.032 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:33.032 [76/268] Linking static target lib/librte_telemetry.a 00:02:33.032 [77/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:33.032 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:33.032 [79/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:33.032 [80/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:33.032 [81/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:33.032 [82/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:33.032 [83/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:33.032 [84/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.032 [85/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:33.032 [86/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:33.032 [87/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:33.032 [88/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.032 [89/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:33.032 [90/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:33.032 [91/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:33.033 [92/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:33.033 [93/268] Linking static target lib/librte_ring.a 00:02:33.033 [94/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:33.033 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:33.033 [96/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:33.033 [97/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:33.033 [98/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:33.033 [99/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:33.033 [100/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:33.033 [101/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:33.033 [102/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:33.033 [103/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:33.033 [104/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:33.033 [105/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:33.033 [106/268] Compiling C object 
lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:33.033 [107/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:33.033 [108/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:33.033 [109/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:33.033 [110/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:33.033 [111/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:33.033 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:33.033 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:33.033 [114/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:33.033 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:33.033 [116/268] Linking static target lib/librte_cmdline.a 00:02:33.033 [117/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:33.033 [118/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:33.033 [119/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:33.033 [120/268] Linking static target lib/librte_meter.a 00:02:33.033 [121/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:33.033 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:33.033 [123/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:33.033 [124/268] Linking static target lib/librte_rcu.a 00:02:33.033 [125/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:33.033 [126/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:33.033 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:33.293 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:33.293 [129/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:33.293 [130/268] Linking static target lib/librte_timer.a 00:02:33.293 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:33.293 [132/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:33.293 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:33.293 [134/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:33.293 [135/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:33.293 [136/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:33.293 [137/268] Linking static target lib/librte_eal.a 00:02:33.293 [138/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:33.293 [139/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:33.293 [140/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:33.293 [141/268] Linking static target lib/librte_net.a 00:02:33.293 [142/268] Linking static target lib/librte_mempool.a 00:02:33.293 [143/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:33.293 [144/268] Linking static target lib/librte_mbuf.a 00:02:33.293 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:33.293 [146/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:33.293 [147/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:33.293 [148/268] Compiling C object 
lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:33.293 [149/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:33.293 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:33.293 [151/268] Linking static target lib/librte_dmadev.a 00:02:33.293 [152/268] Linking static target lib/librte_compressdev.a 00:02:33.293 [153/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:33.293 [154/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:33.293 [155/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:33.293 [156/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:33.293 [157/268] Linking static target lib/librte_hash.a 00:02:33.293 [158/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.293 [159/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:33.293 [160/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:33.293 [161/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:33.293 [162/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:33.293 [163/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:33.293 [164/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:33.293 [165/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:33.553 [166/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.553 [167/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.553 [168/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:33.553 [169/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:33.553 [170/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:33.553 [171/268] Linking target lib/librte_log.so.24.1 00:02:33.553 [172/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:33.553 [173/268] Linking static target lib/librte_power.a 00:02:33.553 [174/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:33.553 [175/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:33.553 [176/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:33.553 [177/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:33.553 [178/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:33.553 [179/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:33.553 [180/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:33.553 [181/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:33.553 [182/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:33.553 [183/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:33.553 [184/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:33.553 [185/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:33.553 [186/268] Linking static target lib/librte_cryptodev.a 00:02:33.553 [187/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:33.553 [188/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture 
output) 00:02:33.553 [189/268] Linking static target lib/librte_security.a 00:02:33.553 [190/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:33.553 [191/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:33.553 [192/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.553 [193/268] Linking static target lib/librte_reorder.a 00:02:33.553 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:33.553 [195/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.553 [196/268] Linking target lib/librte_kvargs.so.24.1 00:02:33.553 [197/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.553 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:33.553 [199/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:33.813 [200/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:33.813 [201/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:33.813 [202/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:33.813 [203/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:33.813 [204/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:33.813 [205/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:33.813 [206/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:33.813 [207/268] Linking target lib/librte_telemetry.so.24.1 00:02:33.813 [208/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:33.813 [209/268] Linking static target drivers/librte_mempool_ring.a 00:02:33.813 [210/268] Linking static target drivers/librte_bus_pci.a 00:02:33.813 [211/268] Linking static target drivers/librte_bus_vdev.a 00:02:33.814 [212/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:33.814 [213/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:33.814 [214/268] Linking static target lib/librte_ethdev.a 00:02:33.814 [215/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:33.814 [216/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:34.073 [217/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.073 [218/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.073 [219/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.073 [220/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.073 [221/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.333 [222/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.333 [223/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.333 [224/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.333 [225/268] Generating lib/cmdline.sym_chk with a custom command 
(wrapped by meson to capture output) 00:02:34.592 [226/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:34.592 [227/268] Linking static target lib/librte_vhost.a 00:02:34.592 [228/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.592 [229/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.974 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.544 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.672 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.612 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.612 [234/268] Linking target lib/librte_eal.so.24.1 00:02:45.873 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:45.873 [236/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:45.873 [237/268] Linking target lib/librte_timer.so.24.1 00:02:45.873 [238/268] Linking target lib/librte_pci.so.24.1 00:02:45.873 [239/268] Linking target lib/librte_meter.so.24.1 00:02:45.873 [240/268] Linking target lib/librte_ring.so.24.1 00:02:45.873 [241/268] Linking target lib/librte_dmadev.so.24.1 00:02:46.133 [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:46.133 [243/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:46.133 [244/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:46.133 [245/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:46.133 [246/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:46.133 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:46.133 [248/268] Linking target lib/librte_mempool.so.24.1 00:02:46.133 [249/268] Linking target lib/librte_rcu.so.24.1 00:02:46.133 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:46.133 [251/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:46.393 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:46.393 [253/268] Linking target lib/librte_mbuf.so.24.1 00:02:46.393 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:46.393 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:02:46.393 [256/268] Linking target lib/librte_compressdev.so.24.1 00:02:46.393 [257/268] Linking target lib/librte_reorder.so.24.1 00:02:46.393 [258/268] Linking target lib/librte_net.so.24.1 00:02:46.653 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:46.653 [260/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:46.653 [261/268] Linking target lib/librte_cmdline.so.24.1 00:02:46.653 [262/268] Linking target lib/librte_security.so.24.1 00:02:46.653 [263/268] Linking target lib/librte_hash.so.24.1 00:02:46.653 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:46.913 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:46.913 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:46.913 [267/268] Linking target lib/librte_vhost.so.24.1 00:02:46.913 
[268/268] Linking target lib/librte_power.so.24.1 00:02:46.913 INFO: autodetecting backend as ninja 00:02:46.913 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp -j 112 00:02:48.294 CC lib/log/log.o 00:02:48.294 CC lib/log/log_flags.o 00:02:48.294 CC lib/log/log_deprecated.o 00:02:48.294 CC lib/ut_mock/mock.o 00:02:48.294 CC lib/ut/ut.o 00:02:48.294 LIB libspdk_log.a 00:02:48.294 LIB libspdk_ut.a 00:02:48.294 LIB libspdk_ut_mock.a 00:02:48.554 CC lib/dma/dma.o 00:02:48.554 CC lib/util/base64.o 00:02:48.554 CC lib/util/bit_array.o 00:02:48.554 CC lib/util/cpuset.o 00:02:48.554 CC lib/util/crc16.o 00:02:48.554 CC lib/util/crc32.o 00:02:48.554 CC lib/util/crc32c.o 00:02:48.554 CC lib/util/crc32_ieee.o 00:02:48.554 CC lib/util/crc64.o 00:02:48.554 CC lib/util/dif.o 00:02:48.554 CC lib/util/fd.o 00:02:48.554 CC lib/util/fd_group.o 00:02:48.554 CC lib/util/file.o 00:02:48.554 CC lib/util/hexlify.o 00:02:48.555 CC lib/ioat/ioat.o 00:02:48.555 CC lib/util/iov.o 00:02:48.555 CC lib/util/math.o 00:02:48.555 CC lib/util/net.o 00:02:48.555 CC lib/util/pipe.o 00:02:48.555 CC lib/util/strerror_tls.o 00:02:48.555 CC lib/util/string.o 00:02:48.555 CC lib/util/uuid.o 00:02:48.555 CC lib/util/xor.o 00:02:48.555 CC lib/util/zipf.o 00:02:48.555 CC lib/util/md5.o 00:02:48.555 CXX lib/trace_parser/trace.o 00:02:48.555 CC lib/vfio_user/host/vfio_user.o 00:02:48.555 CC lib/vfio_user/host/vfio_user_pci.o 00:02:48.555 LIB libspdk_dma.a 00:02:48.555 LIB libspdk_ioat.a 00:02:48.819 LIB libspdk_vfio_user.a 00:02:48.819 LIB libspdk_util.a 00:02:49.088 LIB libspdk_trace_parser.a 00:02:49.088 CC lib/env_dpdk/env.o 00:02:49.088 CC lib/conf/conf.o 00:02:49.088 CC lib/env_dpdk/memory.o 00:02:49.088 CC lib/env_dpdk/pci.o 00:02:49.088 CC lib/env_dpdk/init.o 00:02:49.088 CC lib/vmd/vmd.o 00:02:49.088 CC lib/env_dpdk/threads.o 00:02:49.088 CC lib/vmd/led.o 00:02:49.088 CC lib/env_dpdk/pci_ioat.o 00:02:49.088 CC lib/env_dpdk/pci_virtio.o 00:02:49.088 CC lib/env_dpdk/pci_vmd.o 00:02:49.088 CC lib/json/json_parse.o 00:02:49.088 CC lib/json/json_util.o 00:02:49.088 CC lib/env_dpdk/pci_idxd.o 00:02:49.088 CC lib/json/json_write.o 00:02:49.088 CC lib/env_dpdk/pci_event.o 00:02:49.088 CC lib/env_dpdk/sigbus_handler.o 00:02:49.088 CC lib/env_dpdk/pci_dpdk.o 00:02:49.088 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:49.088 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:49.088 CC lib/rdma_utils/rdma_utils.o 00:02:49.088 CC lib/idxd/idxd.o 00:02:49.088 CC lib/idxd/idxd_user.o 00:02:49.088 CC lib/idxd/idxd_kernel.o 00:02:49.375 LIB libspdk_conf.a 00:02:49.375 LIB libspdk_json.a 00:02:49.375 LIB libspdk_rdma_utils.a 00:02:49.655 LIB libspdk_idxd.a 00:02:49.655 LIB libspdk_vmd.a 00:02:49.655 CC lib/jsonrpc/jsonrpc_server.o 00:02:49.655 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:49.655 CC lib/jsonrpc/jsonrpc_client.o 00:02:49.655 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:49.655 CC lib/rdma_provider/common.o 00:02:49.655 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:49.963 LIB libspdk_jsonrpc.a 00:02:49.963 LIB libspdk_rdma_provider.a 00:02:49.963 LIB libspdk_env_dpdk.a 00:02:50.249 CC lib/rpc/rpc.o 00:02:50.249 LIB libspdk_rpc.a 00:02:50.818 CC lib/trace/trace.o 00:02:50.818 CC lib/trace/trace_flags.o 00:02:50.818 CC lib/trace/trace_rpc.o 00:02:50.818 CC lib/notify/notify.o 00:02:50.818 CC lib/notify/notify_rpc.o 00:02:50.818 CC lib/keyring/keyring.o 00:02:50.818 CC lib/keyring/keyring_rpc.o 00:02:50.818 LIB libspdk_notify.a 00:02:50.818 LIB libspdk_trace.a 00:02:50.818 LIB 
libspdk_keyring.a 00:02:51.387 CC lib/sock/sock.o 00:02:51.387 CC lib/sock/sock_rpc.o 00:02:51.387 CC lib/thread/thread.o 00:02:51.387 CC lib/thread/iobuf.o 00:02:51.387 LIB libspdk_sock.a 00:02:51.956 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:51.956 CC lib/nvme/nvme_ctrlr.o 00:02:51.956 CC lib/nvme/nvme_fabric.o 00:02:51.956 CC lib/nvme/nvme_ns_cmd.o 00:02:51.956 CC lib/nvme/nvme_ns.o 00:02:51.956 CC lib/nvme/nvme_pcie_common.o 00:02:51.956 CC lib/nvme/nvme_pcie.o 00:02:51.956 CC lib/nvme/nvme_qpair.o 00:02:51.956 CC lib/nvme/nvme.o 00:02:51.956 CC lib/nvme/nvme_quirks.o 00:02:51.956 CC lib/nvme/nvme_transport.o 00:02:51.956 CC lib/nvme/nvme_discovery.o 00:02:51.956 CC lib/nvme/nvme_io_msg.o 00:02:51.956 CC lib/nvme/nvme_opal.o 00:02:51.956 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:51.956 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:51.956 CC lib/nvme/nvme_tcp.o 00:02:51.956 CC lib/nvme/nvme_auth.o 00:02:51.956 CC lib/nvme/nvme_poll_group.o 00:02:51.956 CC lib/nvme/nvme_zns.o 00:02:51.956 CC lib/nvme/nvme_stubs.o 00:02:51.956 CC lib/nvme/nvme_cuse.o 00:02:51.956 CC lib/nvme/nvme_vfio_user.o 00:02:51.956 CC lib/nvme/nvme_rdma.o 00:02:51.956 LIB libspdk_thread.a 00:02:52.216 CC lib/blob/blobstore.o 00:02:52.216 CC lib/blob/request.o 00:02:52.216 CC lib/blob/zeroes.o 00:02:52.216 CC lib/blob/blob_bs_dev.o 00:02:52.474 CC lib/vfu_tgt/tgt_endpoint.o 00:02:52.474 CC lib/vfu_tgt/tgt_rpc.o 00:02:52.474 CC lib/accel/accel.o 00:02:52.474 CC lib/accel/accel_rpc.o 00:02:52.474 CC lib/accel/accel_sw.o 00:02:52.474 CC lib/virtio/virtio.o 00:02:52.474 CC lib/virtio/virtio_vhost_user.o 00:02:52.474 CC lib/virtio/virtio_vfio_user.o 00:02:52.474 CC lib/virtio/virtio_pci.o 00:02:52.474 CC lib/init/subsystem_rpc.o 00:02:52.474 CC lib/init/subsystem.o 00:02:52.474 CC lib/init/json_config.o 00:02:52.474 CC lib/fsdev/fsdev.o 00:02:52.474 CC lib/init/rpc.o 00:02:52.474 CC lib/fsdev/fsdev_io.o 00:02:52.474 CC lib/fsdev/fsdev_rpc.o 00:02:52.474 LIB libspdk_init.a 00:02:52.474 LIB libspdk_virtio.a 00:02:52.474 LIB libspdk_vfu_tgt.a 00:02:52.734 LIB libspdk_fsdev.a 00:02:52.734 CC lib/event/log_rpc.o 00:02:52.734 CC lib/event/app.o 00:02:52.734 CC lib/event/reactor.o 00:02:52.734 CC lib/event/app_rpc.o 00:02:52.734 CC lib/event/scheduler_static.o 00:02:52.993 LIB libspdk_event.a 00:02:52.993 LIB libspdk_accel.a 00:02:52.993 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:53.252 LIB libspdk_nvme.a 00:02:53.511 CC lib/bdev/bdev.o 00:02:53.511 CC lib/bdev/bdev_rpc.o 00:02:53.511 CC lib/bdev/bdev_zone.o 00:02:53.511 CC lib/bdev/part.o 00:02:53.511 CC lib/bdev/scsi_nvme.o 00:02:53.511 LIB libspdk_fuse_dispatcher.a 00:02:54.080 LIB libspdk_blob.a 00:02:54.340 CC lib/blobfs/blobfs.o 00:02:54.340 CC lib/blobfs/tree.o 00:02:54.340 CC lib/lvol/lvol.o 00:02:54.909 LIB libspdk_lvol.a 00:02:54.909 LIB libspdk_blobfs.a 00:02:55.169 LIB libspdk_bdev.a 00:02:55.429 CC lib/nbd/nbd.o 00:02:55.429 CC lib/nbd/nbd_rpc.o 00:02:55.429 CC lib/ublk/ublk.o 00:02:55.429 CC lib/ublk/ublk_rpc.o 00:02:55.429 CC lib/scsi/dev.o 00:02:55.429 CC lib/scsi/lun.o 00:02:55.429 CC lib/scsi/port.o 00:02:55.429 CC lib/scsi/scsi.o 00:02:55.429 CC lib/scsi/scsi_bdev.o 00:02:55.429 CC lib/scsi/scsi_pr.o 00:02:55.429 CC lib/scsi/scsi_rpc.o 00:02:55.429 CC lib/scsi/task.o 00:02:55.429 CC lib/nvmf/ctrlr_discovery.o 00:02:55.429 CC lib/nvmf/ctrlr.o 00:02:55.429 CC lib/nvmf/nvmf.o 00:02:55.429 CC lib/ftl/ftl_core.o 00:02:55.429 CC lib/nvmf/ctrlr_bdev.o 00:02:55.429 CC lib/nvmf/subsystem.o 00:02:55.429 CC lib/ftl/ftl_init.o 00:02:55.429 CC lib/ftl/ftl_io.o 00:02:55.429 CC 
lib/nvmf/transport.o 00:02:55.429 CC lib/ftl/ftl_layout.o 00:02:55.429 CC lib/ftl/ftl_sb.o 00:02:55.429 CC lib/ftl/ftl_debug.o 00:02:55.429 CC lib/nvmf/nvmf_rpc.o 00:02:55.429 CC lib/nvmf/stubs.o 00:02:55.429 CC lib/ftl/ftl_l2p.o 00:02:55.429 CC lib/nvmf/mdns_server.o 00:02:55.429 CC lib/nvmf/tcp.o 00:02:55.429 CC lib/ftl/ftl_band.o 00:02:55.429 CC lib/ftl/ftl_l2p_flat.o 00:02:55.429 CC lib/nvmf/rdma.o 00:02:55.429 CC lib/ftl/ftl_nv_cache.o 00:02:55.429 CC lib/nvmf/vfio_user.o 00:02:55.429 CC lib/nvmf/auth.o 00:02:55.688 CC lib/ftl/ftl_band_ops.o 00:02:55.688 CC lib/ftl/ftl_writer.o 00:02:55.688 CC lib/ftl/ftl_rq.o 00:02:55.688 CC lib/ftl/ftl_reloc.o 00:02:55.688 CC lib/ftl/ftl_l2p_cache.o 00:02:55.688 CC lib/ftl/ftl_p2l.o 00:02:55.688 CC lib/ftl/ftl_p2l_log.o 00:02:55.688 CC lib/ftl/mngt/ftl_mngt.o 00:02:55.689 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:55.689 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:55.689 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:55.689 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:55.689 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:55.689 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:55.689 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:55.689 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:55.689 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:55.689 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:55.689 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:55.689 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:55.689 CC lib/ftl/utils/ftl_md.o 00:02:55.689 CC lib/ftl/utils/ftl_conf.o 00:02:55.689 CC lib/ftl/utils/ftl_mempool.o 00:02:55.689 CC lib/ftl/utils/ftl_property.o 00:02:55.689 CC lib/ftl/utils/ftl_bitmap.o 00:02:55.689 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:55.689 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:55.689 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:55.689 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:55.689 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:55.689 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:55.689 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:55.689 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:55.689 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:55.689 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:55.689 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:55.689 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:55.689 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:55.689 CC lib/ftl/base/ftl_base_dev.o 00:02:55.689 CC lib/ftl/ftl_trace.o 00:02:55.689 CC lib/ftl/base/ftl_base_bdev.o 00:02:55.948 LIB libspdk_scsi.a 00:02:55.948 LIB libspdk_nbd.a 00:02:55.948 LIB libspdk_ublk.a 00:02:56.207 CC lib/iscsi/conn.o 00:02:56.207 CC lib/iscsi/param.o 00:02:56.207 CC lib/iscsi/init_grp.o 00:02:56.207 CC lib/iscsi/iscsi.o 00:02:56.207 CC lib/iscsi/tgt_node.o 00:02:56.207 CC lib/iscsi/iscsi_subsystem.o 00:02:56.207 CC lib/iscsi/portal_grp.o 00:02:56.207 CC lib/iscsi/iscsi_rpc.o 00:02:56.207 CC lib/iscsi/task.o 00:02:56.207 CC lib/vhost/vhost.o 00:02:56.207 CC lib/vhost/vhost_rpc.o 00:02:56.207 CC lib/vhost/vhost_scsi.o 00:02:56.207 CC lib/vhost/vhost_blk.o 00:02:56.207 CC lib/vhost/rte_vhost_user.o 00:02:56.207 LIB libspdk_ftl.a 00:02:56.776 LIB libspdk_nvmf.a 00:02:56.776 LIB libspdk_vhost.a 00:02:57.036 LIB libspdk_iscsi.a 00:02:57.604 CC module/vfu_device/vfu_virtio_scsi.o 00:02:57.604 CC module/vfu_device/vfu_virtio.o 00:02:57.604 CC module/vfu_device/vfu_virtio_blk.o 00:02:57.604 CC module/vfu_device/vfu_virtio_rpc.o 00:02:57.604 CC module/vfu_device/vfu_virtio_fs.o 00:02:57.604 CC module/env_dpdk/env_dpdk_rpc.o 00:02:57.604 LIB libspdk_env_dpdk_rpc.a 00:02:57.604 CC module/fsdev/aio/fsdev_aio.o 00:02:57.604 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:57.604 CC 
module/fsdev/aio/linux_aio_mgr.o 00:02:57.604 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:57.604 CC module/accel/iaa/accel_iaa.o 00:02:57.604 CC module/accel/iaa/accel_iaa_rpc.o 00:02:57.605 CC module/keyring/file/keyring.o 00:02:57.605 CC module/keyring/file/keyring_rpc.o 00:02:57.605 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:57.605 CC module/blob/bdev/blob_bdev.o 00:02:57.605 CC module/accel/error/accel_error.o 00:02:57.605 CC module/accel/error/accel_error_rpc.o 00:02:57.605 CC module/sock/posix/posix.o 00:02:57.605 CC module/scheduler/gscheduler/gscheduler.o 00:02:57.605 CC module/accel/ioat/accel_ioat.o 00:02:57.605 CC module/accel/ioat/accel_ioat_rpc.o 00:02:57.605 CC module/accel/dsa/accel_dsa.o 00:02:57.605 CC module/keyring/linux/keyring.o 00:02:57.605 CC module/accel/dsa/accel_dsa_rpc.o 00:02:57.605 CC module/keyring/linux/keyring_rpc.o 00:02:57.864 LIB libspdk_keyring_file.a 00:02:57.864 LIB libspdk_scheduler_dpdk_governor.a 00:02:57.864 LIB libspdk_keyring_linux.a 00:02:57.864 LIB libspdk_scheduler_gscheduler.a 00:02:57.864 LIB libspdk_scheduler_dynamic.a 00:02:57.864 LIB libspdk_accel_iaa.a 00:02:57.864 LIB libspdk_accel_error.a 00:02:57.864 LIB libspdk_accel_ioat.a 00:02:57.864 LIB libspdk_blob_bdev.a 00:02:57.864 LIB libspdk_vfu_device.a 00:02:57.864 LIB libspdk_accel_dsa.a 00:02:58.124 LIB libspdk_fsdev_aio.a 00:02:58.124 LIB libspdk_sock_posix.a 00:02:58.383 CC module/bdev/delay/vbdev_delay.o 00:02:58.383 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:58.383 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:58.384 CC module/bdev/lvol/vbdev_lvol.o 00:02:58.384 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:58.384 CC module/bdev/gpt/gpt.o 00:02:58.384 CC module/bdev/gpt/vbdev_gpt.o 00:02:58.384 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:58.384 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:58.384 CC module/bdev/error/vbdev_error.o 00:02:58.384 CC module/bdev/split/vbdev_split_rpc.o 00:02:58.384 CC module/bdev/malloc/bdev_malloc.o 00:02:58.384 CC module/bdev/split/vbdev_split.o 00:02:58.384 CC module/bdev/error/vbdev_error_rpc.o 00:02:58.384 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:58.384 CC module/bdev/nvme/bdev_nvme.o 00:02:58.384 CC module/bdev/nvme/nvme_rpc.o 00:02:58.384 CC module/bdev/nvme/bdev_mdns_client.o 00:02:58.384 CC module/bdev/nvme/vbdev_opal.o 00:02:58.384 CC module/blobfs/bdev/blobfs_bdev.o 00:02:58.384 CC module/bdev/passthru/vbdev_passthru.o 00:02:58.384 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:58.384 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:58.384 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:58.384 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:58.384 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:58.384 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:58.384 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:58.384 CC module/bdev/null/bdev_null.o 00:02:58.384 CC module/bdev/null/bdev_null_rpc.o 00:02:58.384 CC module/bdev/aio/bdev_aio.o 00:02:58.384 CC module/bdev/ftl/bdev_ftl.o 00:02:58.384 CC module/bdev/aio/bdev_aio_rpc.o 00:02:58.384 CC module/bdev/iscsi/bdev_iscsi.o 00:02:58.384 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:58.384 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:58.384 CC module/bdev/raid/bdev_raid.o 00:02:58.384 CC module/bdev/raid/bdev_raid_rpc.o 00:02:58.384 CC module/bdev/raid/bdev_raid_sb.o 00:02:58.384 CC module/bdev/raid/raid0.o 00:02:58.384 CC module/bdev/raid/raid1.o 00:02:58.384 CC module/bdev/raid/concat.o 00:02:58.384 LIB libspdk_blobfs_bdev.a 00:02:58.643 LIB libspdk_bdev_split.a 00:02:58.643 LIB 
libspdk_bdev_error.a 00:02:58.643 LIB libspdk_bdev_gpt.a 00:02:58.643 LIB libspdk_bdev_null.a 00:02:58.643 LIB libspdk_bdev_zone_block.a 00:02:58.643 LIB libspdk_bdev_ftl.a 00:02:58.643 LIB libspdk_bdev_delay.a 00:02:58.643 LIB libspdk_bdev_passthru.a 00:02:58.643 LIB libspdk_bdev_malloc.a 00:02:58.643 LIB libspdk_bdev_aio.a 00:02:58.643 LIB libspdk_bdev_iscsi.a 00:02:58.643 LIB libspdk_bdev_lvol.a 00:02:58.643 LIB libspdk_bdev_virtio.a 00:02:58.902 LIB libspdk_bdev_raid.a 00:02:59.838 LIB libspdk_bdev_nvme.a 00:03:00.407 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:00.407 CC module/event/subsystems/vmd/vmd.o 00:03:00.407 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:00.407 CC module/event/subsystems/keyring/keyring.o 00:03:00.407 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:00.407 CC module/event/subsystems/fsdev/fsdev.o 00:03:00.407 CC module/event/subsystems/iobuf/iobuf.o 00:03:00.407 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:00.407 CC module/event/subsystems/scheduler/scheduler.o 00:03:00.407 CC module/event/subsystems/sock/sock.o 00:03:00.666 LIB libspdk_event_keyring.a 00:03:00.666 LIB libspdk_event_vhost_blk.a 00:03:00.666 LIB libspdk_event_fsdev.a 00:03:00.666 LIB libspdk_event_vmd.a 00:03:00.666 LIB libspdk_event_vfu_tgt.a 00:03:00.666 LIB libspdk_event_sock.a 00:03:00.666 LIB libspdk_event_scheduler.a 00:03:00.666 LIB libspdk_event_iobuf.a 00:03:00.925 CC module/event/subsystems/accel/accel.o 00:03:00.925 LIB libspdk_event_accel.a 00:03:01.494 CC module/event/subsystems/bdev/bdev.o 00:03:01.494 LIB libspdk_event_bdev.a 00:03:01.753 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:01.753 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:01.753 CC module/event/subsystems/ublk/ublk.o 00:03:01.753 CC module/event/subsystems/nbd/nbd.o 00:03:01.753 CC module/event/subsystems/scsi/scsi.o 00:03:02.013 LIB libspdk_event_nbd.a 00:03:02.013 LIB libspdk_event_ublk.a 00:03:02.013 LIB libspdk_event_scsi.a 00:03:02.013 LIB libspdk_event_nvmf.a 00:03:02.272 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:02.272 CC module/event/subsystems/iscsi/iscsi.o 00:03:02.532 LIB libspdk_event_vhost_scsi.a 00:03:02.532 LIB libspdk_event_iscsi.a 00:03:02.791 CC app/spdk_nvme_perf/perf.o 00:03:02.791 CC app/spdk_nvme_identify/identify.o 00:03:02.791 CC app/trace_record/trace_record.o 00:03:02.791 CXX app/trace/trace.o 00:03:02.791 CC app/spdk_lspci/spdk_lspci.o 00:03:02.791 CC app/spdk_nvme_discover/discovery_aer.o 00:03:02.791 CC app/spdk_top/spdk_top.o 00:03:02.791 TEST_HEADER include/spdk/accel.h 00:03:02.791 TEST_HEADER include/spdk/accel_module.h 00:03:02.791 TEST_HEADER include/spdk/assert.h 00:03:02.791 TEST_HEADER include/spdk/bdev.h 00:03:02.791 CC test/rpc_client/rpc_client_test.o 00:03:02.791 TEST_HEADER include/spdk/base64.h 00:03:02.791 TEST_HEADER include/spdk/barrier.h 00:03:02.791 TEST_HEADER include/spdk/bdev_zone.h 00:03:02.792 TEST_HEADER include/spdk/bdev_module.h 00:03:02.792 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:02.792 TEST_HEADER include/spdk/bit_pool.h 00:03:02.792 CC app/spdk_dd/spdk_dd.o 00:03:02.792 TEST_HEADER include/spdk/bit_array.h 00:03:02.792 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:02.792 TEST_HEADER include/spdk/blobfs.h 00:03:02.792 TEST_HEADER include/spdk/blob_bdev.h 00:03:02.792 TEST_HEADER include/spdk/conf.h 00:03:02.792 TEST_HEADER include/spdk/blob.h 00:03:02.792 TEST_HEADER include/spdk/cpuset.h 00:03:02.792 TEST_HEADER include/spdk/config.h 00:03:02.792 TEST_HEADER include/spdk/crc32.h 00:03:02.792 TEST_HEADER 
include/spdk/crc16.h 00:03:02.792 TEST_HEADER include/spdk/crc64.h 00:03:02.792 TEST_HEADER include/spdk/dma.h 00:03:02.792 TEST_HEADER include/spdk/dif.h 00:03:02.792 CC app/nvmf_tgt/nvmf_main.o 00:03:02.792 TEST_HEADER include/spdk/env_dpdk.h 00:03:02.792 TEST_HEADER include/spdk/endian.h 00:03:02.792 TEST_HEADER include/spdk/env.h 00:03:02.792 TEST_HEADER include/spdk/fd.h 00:03:02.792 TEST_HEADER include/spdk/event.h 00:03:02.792 TEST_HEADER include/spdk/file.h 00:03:02.792 TEST_HEADER include/spdk/fd_group.h 00:03:02.792 TEST_HEADER include/spdk/fsdev_module.h 00:03:02.792 TEST_HEADER include/spdk/ftl.h 00:03:02.792 TEST_HEADER include/spdk/gpt_spec.h 00:03:02.792 TEST_HEADER include/spdk/fsdev.h 00:03:02.792 TEST_HEADER include/spdk/histogram_data.h 00:03:02.792 TEST_HEADER include/spdk/idxd.h 00:03:02.792 TEST_HEADER include/spdk/hexlify.h 00:03:02.792 TEST_HEADER include/spdk/init.h 00:03:02.792 TEST_HEADER include/spdk/ioat.h 00:03:02.792 TEST_HEADER include/spdk/idxd_spec.h 00:03:02.792 TEST_HEADER include/spdk/ioat_spec.h 00:03:02.792 TEST_HEADER include/spdk/iscsi_spec.h 00:03:02.792 TEST_HEADER include/spdk/jsonrpc.h 00:03:02.792 TEST_HEADER include/spdk/json.h 00:03:02.792 CC app/iscsi_tgt/iscsi_tgt.o 00:03:02.792 TEST_HEADER include/spdk/keyring.h 00:03:02.792 TEST_HEADER include/spdk/likely.h 00:03:02.792 TEST_HEADER include/spdk/log.h 00:03:02.792 TEST_HEADER include/spdk/lvol.h 00:03:02.792 TEST_HEADER include/spdk/keyring_module.h 00:03:02.792 TEST_HEADER include/spdk/md5.h 00:03:02.792 TEST_HEADER include/spdk/memory.h 00:03:02.792 TEST_HEADER include/spdk/nbd.h 00:03:02.792 TEST_HEADER include/spdk/net.h 00:03:02.792 TEST_HEADER include/spdk/mmio.h 00:03:02.792 TEST_HEADER include/spdk/notify.h 00:03:02.792 TEST_HEADER include/spdk/nvme.h 00:03:02.792 TEST_HEADER include/spdk/nvme_intel.h 00:03:02.792 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:02.792 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:02.792 TEST_HEADER include/spdk/nvme_spec.h 00:03:02.792 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:02.792 TEST_HEADER include/spdk/nvmf.h 00:03:02.792 TEST_HEADER include/spdk/nvme_zns.h 00:03:02.792 TEST_HEADER include/spdk/nvmf_transport.h 00:03:02.792 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:02.792 TEST_HEADER include/spdk/nvmf_spec.h 00:03:02.792 TEST_HEADER include/spdk/opal.h 00:03:02.792 TEST_HEADER include/spdk/opal_spec.h 00:03:02.792 CC app/spdk_tgt/spdk_tgt.o 00:03:02.792 TEST_HEADER include/spdk/pipe.h 00:03:03.056 TEST_HEADER include/spdk/pci_ids.h 00:03:03.056 TEST_HEADER include/spdk/scheduler.h 00:03:03.056 TEST_HEADER include/spdk/queue.h 00:03:03.057 TEST_HEADER include/spdk/reduce.h 00:03:03.057 TEST_HEADER include/spdk/rpc.h 00:03:03.057 TEST_HEADER include/spdk/scsi.h 00:03:03.057 TEST_HEADER include/spdk/scsi_spec.h 00:03:03.057 TEST_HEADER include/spdk/string.h 00:03:03.057 TEST_HEADER include/spdk/sock.h 00:03:03.057 TEST_HEADER include/spdk/stdinc.h 00:03:03.057 TEST_HEADER include/spdk/trace.h 00:03:03.057 TEST_HEADER include/spdk/thread.h 00:03:03.057 TEST_HEADER include/spdk/trace_parser.h 00:03:03.057 TEST_HEADER include/spdk/util.h 00:03:03.057 TEST_HEADER include/spdk/tree.h 00:03:03.057 TEST_HEADER include/spdk/version.h 00:03:03.057 TEST_HEADER include/spdk/ublk.h 00:03:03.057 TEST_HEADER include/spdk/uuid.h 00:03:03.057 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:03.057 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:03.057 TEST_HEADER include/spdk/xor.h 00:03:03.057 TEST_HEADER include/spdk/vmd.h 00:03:03.057 TEST_HEADER 
include/spdk/vhost.h 00:03:03.057 CXX test/cpp_headers/accel_module.o 00:03:03.057 CXX test/cpp_headers/accel.o 00:03:03.057 TEST_HEADER include/spdk/zipf.h 00:03:03.057 CXX test/cpp_headers/assert.o 00:03:03.057 CXX test/cpp_headers/barrier.o 00:03:03.057 CXX test/cpp_headers/base64.o 00:03:03.057 CXX test/cpp_headers/bdev.o 00:03:03.057 CXX test/cpp_headers/bdev_zone.o 00:03:03.057 CXX test/cpp_headers/bdev_module.o 00:03:03.057 CXX test/cpp_headers/blob_bdev.o 00:03:03.057 CXX test/cpp_headers/bit_array.o 00:03:03.057 CXX test/cpp_headers/bit_pool.o 00:03:03.057 CXX test/cpp_headers/blobfs.o 00:03:03.057 CXX test/cpp_headers/config.o 00:03:03.057 CXX test/cpp_headers/blobfs_bdev.o 00:03:03.057 CXX test/cpp_headers/blob.o 00:03:03.057 CXX test/cpp_headers/conf.o 00:03:03.057 CXX test/cpp_headers/cpuset.o 00:03:03.057 CXX test/cpp_headers/crc16.o 00:03:03.057 CXX test/cpp_headers/crc32.o 00:03:03.057 CXX test/cpp_headers/endian.o 00:03:03.057 CXX test/cpp_headers/crc64.o 00:03:03.057 CXX test/cpp_headers/dif.o 00:03:03.057 CXX test/cpp_headers/dma.o 00:03:03.057 CXX test/cpp_headers/env.o 00:03:03.057 CXX test/cpp_headers/event.o 00:03:03.057 CXX test/cpp_headers/env_dpdk.o 00:03:03.057 CC examples/util/zipf/zipf.o 00:03:03.057 CXX test/cpp_headers/file.o 00:03:03.057 CXX test/cpp_headers/fd_group.o 00:03:03.057 CXX test/cpp_headers/fsdev.o 00:03:03.057 CXX test/cpp_headers/fd.o 00:03:03.057 CXX test/cpp_headers/ftl.o 00:03:03.057 CXX test/cpp_headers/fsdev_module.o 00:03:03.057 CXX test/cpp_headers/gpt_spec.o 00:03:03.057 CXX test/cpp_headers/histogram_data.o 00:03:03.057 CXX test/cpp_headers/hexlify.o 00:03:03.057 CXX test/cpp_headers/idxd.o 00:03:03.057 CC examples/ioat/perf/perf.o 00:03:03.057 CXX test/cpp_headers/idxd_spec.o 00:03:03.057 CXX test/cpp_headers/init.o 00:03:03.057 CXX test/cpp_headers/ioat.o 00:03:03.057 CXX test/cpp_headers/ioat_spec.o 00:03:03.057 CC examples/ioat/verify/verify.o 00:03:03.057 CXX test/cpp_headers/json.o 00:03:03.057 CXX test/cpp_headers/iscsi_spec.o 00:03:03.057 CXX test/cpp_headers/jsonrpc.o 00:03:03.057 CXX test/cpp_headers/keyring.o 00:03:03.057 CXX test/cpp_headers/keyring_module.o 00:03:03.057 CXX test/cpp_headers/likely.o 00:03:03.057 CXX test/cpp_headers/md5.o 00:03:03.057 CXX test/cpp_headers/lvol.o 00:03:03.057 CXX test/cpp_headers/log.o 00:03:03.057 CXX test/cpp_headers/memory.o 00:03:03.057 CXX test/cpp_headers/nbd.o 00:03:03.057 CXX test/cpp_headers/mmio.o 00:03:03.057 CXX test/cpp_headers/net.o 00:03:03.057 CXX test/cpp_headers/notify.o 00:03:03.057 CXX test/cpp_headers/nvme.o 00:03:03.057 CXX test/cpp_headers/nvme_intel.o 00:03:03.057 CXX test/cpp_headers/nvme_ocssd.o 00:03:03.057 CC test/app/stub/stub.o 00:03:03.057 CXX test/cpp_headers/nvme_spec.o 00:03:03.057 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:03.057 CXX test/cpp_headers/nvmf_cmd.o 00:03:03.057 CXX test/cpp_headers/nvme_zns.o 00:03:03.057 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:03.057 CXX test/cpp_headers/nvmf.o 00:03:03.057 CXX test/cpp_headers/nvmf_spec.o 00:03:03.057 CXX test/cpp_headers/nvmf_transport.o 00:03:03.057 CXX test/cpp_headers/opal.o 00:03:03.057 CXX test/cpp_headers/opal_spec.o 00:03:03.057 CXX test/cpp_headers/pci_ids.o 00:03:03.057 CXX test/cpp_headers/pipe.o 00:03:03.057 LINK spdk_lspci 00:03:03.057 CXX test/cpp_headers/queue.o 00:03:03.057 CXX test/cpp_headers/rpc.o 00:03:03.057 CXX test/cpp_headers/reduce.o 00:03:03.057 CXX test/cpp_headers/scheduler.o 00:03:03.057 CXX test/cpp_headers/scsi.o 00:03:03.057 CXX test/cpp_headers/scsi_spec.o 00:03:03.057 
CXX test/cpp_headers/sock.o 00:03:03.057 CXX test/cpp_headers/stdinc.o 00:03:03.057 CXX test/cpp_headers/string.o 00:03:03.057 CXX test/cpp_headers/trace.o 00:03:03.057 CXX test/cpp_headers/thread.o 00:03:03.057 CXX test/cpp_headers/trace_parser.o 00:03:03.057 CXX test/cpp_headers/tree.o 00:03:03.057 CC test/app/jsoncat/jsoncat.o 00:03:03.057 CC test/thread/lock/spdk_lock.o 00:03:03.057 CC test/app/histogram_perf/histogram_perf.o 00:03:03.057 CC test/thread/poller_perf/poller_perf.o 00:03:03.057 CC test/env/vtophys/vtophys.o 00:03:03.057 CC app/fio/nvme/fio_plugin.o 00:03:03.057 CC test/env/memory/memory_ut.o 00:03:03.057 CXX test/cpp_headers/ublk.o 00:03:03.057 CC test/env/pci/pci_ut.o 00:03:03.057 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:03.057 CXX test/cpp_headers/util.o 00:03:03.057 CC test/dma/test_dma/test_dma.o 00:03:03.057 CC test/app/bdev_svc/bdev_svc.o 00:03:03.057 LINK rpc_client_test 00:03:03.057 LINK interrupt_tgt 00:03:03.057 LINK spdk_trace_record 00:03:03.057 LINK spdk_nvme_discover 00:03:03.057 CC app/fio/bdev/fio_plugin.o 00:03:03.057 LINK nvmf_tgt 00:03:03.057 CC test/env/mem_callbacks/mem_callbacks.o 00:03:03.057 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:03.057 LINK iscsi_tgt 00:03:03.057 CXX test/cpp_headers/uuid.o 00:03:03.057 LINK zipf 00:03:03.057 CXX test/cpp_headers/version.o 00:03:03.316 CXX test/cpp_headers/vfio_user_pci.o 00:03:03.316 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:03.316 CXX test/cpp_headers/vfio_user_spec.o 00:03:03.316 CXX test/cpp_headers/vhost.o 00:03:03.316 CXX test/cpp_headers/vmd.o 00:03:03.316 CXX test/cpp_headers/xor.o 00:03:03.316 CXX test/cpp_headers/zipf.o 00:03:03.316 LINK jsoncat 00:03:03.317 LINK histogram_perf 00:03:03.317 LINK vtophys 00:03:03.317 LINK poller_perf 00:03:03.317 LINK spdk_tgt 00:03:03.317 LINK verify 00:03:03.317 LINK stub 00:03:03.317 LINK env_dpdk_post_init 00:03:03.317 LINK ioat_perf 00:03:03.317 LINK bdev_svc 00:03:03.317 LINK spdk_trace 00:03:03.317 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:03.317 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:03.317 CC test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.o 00:03:03.317 LINK spdk_dd 00:03:03.317 CC test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.o 00:03:03.317 LINK pci_ut 00:03:03.317 LINK spdk_nvme_identify 00:03:03.576 LINK test_dma 00:03:03.576 LINK spdk_nvme 00:03:03.576 LINK nvme_fuzz 00:03:03.576 LINK spdk_bdev 00:03:03.576 LINK spdk_nvme_perf 00:03:03.576 LINK mem_callbacks 00:03:03.576 LINK llvm_vfio_fuzz 00:03:03.576 LINK vhost_fuzz 00:03:03.576 LINK spdk_top 00:03:03.834 LINK llvm_nvme_fuzz 00:03:03.834 CC app/vhost/vhost.o 00:03:03.834 CC examples/vmd/lsvmd/lsvmd.o 00:03:03.834 CC examples/idxd/perf/perf.o 00:03:03.834 CC examples/vmd/led/led.o 00:03:03.834 CC examples/sock/hello_world/hello_sock.o 00:03:03.834 LINK memory_ut 00:03:03.834 CC examples/thread/thread/thread_ex.o 00:03:04.093 LINK lsvmd 00:03:04.093 LINK led 00:03:04.093 LINK vhost 00:03:04.093 LINK spdk_lock 00:03:04.093 LINK hello_sock 00:03:04.093 LINK idxd_perf 00:03:04.093 LINK thread 00:03:04.093 LINK iscsi_fuzz 00:03:04.660 CC test/event/reactor_perf/reactor_perf.o 00:03:04.660 CC test/event/reactor/reactor.o 00:03:04.660 CC test/event/event_perf/event_perf.o 00:03:04.660 CC test/event/app_repeat/app_repeat.o 00:03:04.660 CC test/event/scheduler/scheduler.o 00:03:04.660 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:04.660 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:04.660 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:04.918 CC 
examples/nvme/arbitration/arbitration.o 00:03:04.918 CC examples/nvme/abort/abort.o 00:03:04.918 CC examples/nvme/hotplug/hotplug.o 00:03:04.918 CC examples/nvme/reconnect/reconnect.o 00:03:04.918 CC examples/nvme/hello_world/hello_world.o 00:03:04.918 LINK reactor_perf 00:03:04.918 LINK event_perf 00:03:04.918 LINK reactor 00:03:04.918 LINK app_repeat 00:03:04.918 LINK pmr_persistence 00:03:04.918 LINK cmb_copy 00:03:04.918 LINK scheduler 00:03:04.918 LINK hotplug 00:03:04.918 LINK hello_world 00:03:04.918 LINK reconnect 00:03:04.918 LINK arbitration 00:03:05.176 LINK abort 00:03:05.177 LINK nvme_manage 00:03:05.177 CC test/nvme/simple_copy/simple_copy.o 00:03:05.177 CC test/nvme/fdp/fdp.o 00:03:05.177 CC test/nvme/reserve/reserve.o 00:03:05.177 CC test/nvme/err_injection/err_injection.o 00:03:05.177 CC test/nvme/e2edp/nvme_dp.o 00:03:05.177 CC test/nvme/overhead/overhead.o 00:03:05.177 CC test/nvme/sgl/sgl.o 00:03:05.177 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:05.177 CC test/nvme/aer/aer.o 00:03:05.177 CC test/nvme/fused_ordering/fused_ordering.o 00:03:05.177 CC test/nvme/startup/startup.o 00:03:05.177 CC test/nvme/compliance/nvme_compliance.o 00:03:05.177 CC test/nvme/boot_partition/boot_partition.o 00:03:05.177 CC test/nvme/cuse/cuse.o 00:03:05.177 CC test/nvme/reset/reset.o 00:03:05.177 CC test/nvme/connect_stress/connect_stress.o 00:03:05.177 CC test/accel/dif/dif.o 00:03:05.177 CC test/blobfs/mkfs/mkfs.o 00:03:05.177 CC test/lvol/esnap/esnap.o 00:03:05.177 LINK startup 00:03:05.177 LINK boot_partition 00:03:05.177 LINK connect_stress 00:03:05.177 LINK doorbell_aers 00:03:05.177 LINK fused_ordering 00:03:05.177 LINK simple_copy 00:03:05.177 LINK reserve 00:03:05.177 LINK err_injection 00:03:05.435 LINK sgl 00:03:05.435 LINK aer 00:03:05.435 LINK overhead 00:03:05.435 LINK nvme_dp 00:03:05.435 LINK mkfs 00:03:05.435 LINK fdp 00:03:05.435 LINK reset 00:03:05.435 LINK nvme_compliance 00:03:05.694 LINK dif 00:03:05.694 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:05.953 CC examples/accel/perf/accel_perf.o 00:03:05.953 CC examples/blob/cli/blobcli.o 00:03:05.953 CC examples/blob/hello_world/hello_blob.o 00:03:05.953 LINK cuse 00:03:05.953 LINK hello_blob 00:03:05.953 LINK hello_fsdev 00:03:06.212 LINK accel_perf 00:03:06.212 LINK blobcli 00:03:06.780 CC examples/bdev/hello_world/hello_bdev.o 00:03:06.780 CC examples/bdev/bdevperf/bdevperf.o 00:03:07.040 LINK hello_bdev 00:03:07.299 CC test/bdev/bdevio/bdevio.o 00:03:07.299 LINK bdevperf 00:03:07.559 LINK bdevio 00:03:08.497 LINK esnap 00:03:08.756 CC examples/nvmf/nvmf/nvmf.o 00:03:09.016 LINK nvmf 00:03:10.395 00:03:10.395 real 0m46.929s 00:03:10.395 user 6m19.106s 00:03:10.395 sys 2m35.179s 00:03:10.395 12:23:15 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:10.395 12:23:15 make -- common/autotest_common.sh@10 -- $ set +x 00:03:10.395 ************************************ 00:03:10.395 END TEST make 00:03:10.395 ************************************ 00:03:10.395 12:23:15 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:10.395 12:23:15 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:10.395 12:23:15 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:10.395 12:23:15 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:10.395 12:23:15 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:10.395 12:23:15 -- pm/common@44 -- $ pid=855581 00:03:10.395 12:23:15 -- pm/common@50 -- $ kill -TERM 855581 
00:03:10.395 12:23:15 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:10.395 12:23:15 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:10.395 12:23:15 -- pm/common@44 -- $ pid=855583 00:03:10.395 12:23:15 -- pm/common@50 -- $ kill -TERM 855583 00:03:10.395 12:23:15 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:10.395 12:23:15 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:10.395 12:23:15 -- pm/common@44 -- $ pid=855585 00:03:10.395 12:23:15 -- pm/common@50 -- $ kill -TERM 855585 00:03:10.395 12:23:15 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:10.395 12:23:15 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:10.395 12:23:15 -- pm/common@44 -- $ pid=855611 00:03:10.395 12:23:15 -- pm/common@50 -- $ sudo -E kill -TERM 855611 00:03:10.395 12:23:15 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:10.395 12:23:15 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:03:10.395 12:23:15 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:10.395 12:23:15 -- common/autotest_common.sh@1711 -- # lcov --version 00:03:10.395 12:23:15 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:10.655 12:23:16 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:10.655 12:23:16 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:10.655 12:23:16 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:10.655 12:23:16 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:10.655 12:23:16 -- scripts/common.sh@336 -- # IFS=.-: 00:03:10.655 12:23:16 -- scripts/common.sh@336 -- # read -ra ver1 00:03:10.655 12:23:16 -- scripts/common.sh@337 -- # IFS=.-: 00:03:10.655 12:23:16 -- scripts/common.sh@337 -- # read -ra ver2 00:03:10.655 12:23:16 -- scripts/common.sh@338 -- # local 'op=<' 00:03:10.655 12:23:16 -- scripts/common.sh@340 -- # ver1_l=2 00:03:10.655 12:23:16 -- scripts/common.sh@341 -- # ver2_l=1 00:03:10.655 12:23:16 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:10.655 12:23:16 -- scripts/common.sh@344 -- # case "$op" in 00:03:10.655 12:23:16 -- scripts/common.sh@345 -- # : 1 00:03:10.655 12:23:16 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:10.655 12:23:16 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:10.655 12:23:16 -- scripts/common.sh@365 -- # decimal 1 00:03:10.655 12:23:16 -- scripts/common.sh@353 -- # local d=1 00:03:10.655 12:23:16 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:10.655 12:23:16 -- scripts/common.sh@355 -- # echo 1 00:03:10.655 12:23:16 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:10.655 12:23:16 -- scripts/common.sh@366 -- # decimal 2 00:03:10.655 12:23:16 -- scripts/common.sh@353 -- # local d=2 00:03:10.655 12:23:16 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:10.655 12:23:16 -- scripts/common.sh@355 -- # echo 2 00:03:10.655 12:23:16 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:10.655 12:23:16 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:10.655 12:23:16 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:10.655 12:23:16 -- scripts/common.sh@368 -- # return 0 00:03:10.655 12:23:16 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:10.655 12:23:16 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:10.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:10.655 --rc genhtml_branch_coverage=1 00:03:10.655 --rc genhtml_function_coverage=1 00:03:10.655 --rc genhtml_legend=1 00:03:10.655 --rc geninfo_all_blocks=1 00:03:10.655 --rc geninfo_unexecuted_blocks=1 00:03:10.655 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:10.655 ' 00:03:10.655 12:23:16 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:10.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:10.655 --rc genhtml_branch_coverage=1 00:03:10.655 --rc genhtml_function_coverage=1 00:03:10.655 --rc genhtml_legend=1 00:03:10.655 --rc geninfo_all_blocks=1 00:03:10.655 --rc geninfo_unexecuted_blocks=1 00:03:10.655 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:10.655 ' 00:03:10.655 12:23:16 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:10.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:10.655 --rc genhtml_branch_coverage=1 00:03:10.655 --rc genhtml_function_coverage=1 00:03:10.655 --rc genhtml_legend=1 00:03:10.655 --rc geninfo_all_blocks=1 00:03:10.655 --rc geninfo_unexecuted_blocks=1 00:03:10.655 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:10.655 ' 00:03:10.655 12:23:16 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:10.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:10.655 --rc genhtml_branch_coverage=1 00:03:10.655 --rc genhtml_function_coverage=1 00:03:10.655 --rc genhtml_legend=1 00:03:10.655 --rc geninfo_all_blocks=1 00:03:10.655 --rc geninfo_unexecuted_blocks=1 00:03:10.655 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:10.655 ' 00:03:10.655 12:23:16 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:03:10.655 12:23:16 -- nvmf/common.sh@7 -- # uname -s 00:03:10.655 12:23:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:10.655 12:23:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:10.655 12:23:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:10.655 12:23:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:10.655 12:23:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:10.655 12:23:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:10.655 12:23:16 -- nvmf/common.sh@14 -- 
# NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:10.655 12:23:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:10.655 12:23:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:10.655 12:23:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:10.655 12:23:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:03:10.655 12:23:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:03:10.655 12:23:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:10.655 12:23:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:10.655 12:23:16 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:10.655 12:23:16 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:10.656 12:23:16 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:03:10.656 12:23:16 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:10.656 12:23:16 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:10.656 12:23:16 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:10.656 12:23:16 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:10.656 12:23:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:10.656 12:23:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:10.656 12:23:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:10.656 12:23:16 -- paths/export.sh@5 -- # export PATH 00:03:10.656 12:23:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:10.656 12:23:16 -- nvmf/common.sh@51 -- # : 0 00:03:10.656 12:23:16 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:10.656 12:23:16 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:10.656 12:23:16 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:10.656 12:23:16 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:10.656 12:23:16 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:10.656 12:23:16 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:10.656 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:10.656 12:23:16 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:10.656 12:23:16 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:10.656 12:23:16 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:10.656 12:23:16 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:10.656 12:23:16 -- spdk/autotest.sh@32 -- # uname -s 00:03:10.656 
12:23:16 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:10.656 12:23:16 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:10.656 12:23:16 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:03:10.656 12:23:16 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:10.656 12:23:16 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:03:10.656 12:23:16 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:10.656 12:23:16 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:10.656 12:23:16 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:10.656 12:23:16 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:10.656 12:23:16 -- spdk/autotest.sh@48 -- # udevadm_pid=920731 00:03:10.656 12:23:16 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:10.656 12:23:16 -- pm/common@17 -- # local monitor 00:03:10.656 12:23:16 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:10.656 12:23:16 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:10.656 12:23:16 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:10.656 12:23:16 -- pm/common@21 -- # date +%s 00:03:10.656 12:23:16 -- pm/common@21 -- # date +%s 00:03:10.656 12:23:16 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:10.656 12:23:16 -- pm/common@25 -- # sleep 1 00:03:10.656 12:23:16 -- pm/common@21 -- # date +%s 00:03:10.656 12:23:16 -- pm/common@21 -- # date +%s 00:03:10.656 12:23:16 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1734348196 00:03:10.656 12:23:16 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1734348196 00:03:10.656 12:23:16 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1734348196 00:03:10.656 12:23:16 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1734348196 00:03:10.656 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1734348196_collect-cpu-temp.pm.log 00:03:10.656 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1734348196_collect-vmstat.pm.log 00:03:10.656 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1734348196_collect-cpu-load.pm.log 00:03:10.656 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1734348196_collect-bmc-pm.bmc.pm.log 00:03:11.594 12:23:17 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:11.594 12:23:17 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:11.594 12:23:17 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:11.594 12:23:17 -- common/autotest_common.sh@10 -- # set +x 
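(Annotation) The spdk/autotest.sh@33-40 entries above save the existing kernel core_pattern (the systemd-coredump pipe), point it at SPDK's core-collector.sh, create the coredumps output directory, and start the power/CPU monitors with a shared epoch suffix so their logs line up. A condensed sketch of that sequence, assuming root and reusing the paths and flags shown in the trace (the real script also restores the saved pattern during cleanup, which is not shown here):

  out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output
  # Remember the current handler, presumably so cleanup can put it back later
  old_core_pattern=$(cat /proc/sys/kernel/core_pattern)
  mkdir -p "$out/coredumps"
  # Pipe future crashes through SPDK's collector (needs root); %P = PID, %s = signal, %t = dump time
  echo "|/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/core-collector.sh %P %s %t" > /proc/sys/kernel/core_pattern
  # One timestamp shared by all resource monitors, matching the monitor.autotest.sh.<epoch> names above
  ts=$(date +%s)
  /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load \
      -d "$out/power" -l -p "monitor.autotest.sh.$ts" &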
00:03:11.594 12:23:17 -- spdk/autotest.sh@59 -- # create_test_list 00:03:11.594 12:23:17 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:11.594 12:23:17 -- common/autotest_common.sh@10 -- # set +x 00:03:11.594 12:23:17 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/autotest.sh 00:03:11.854 12:23:17 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:03:11.854 12:23:17 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:03:11.854 12:23:17 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:03:11.854 12:23:17 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:03:11.854 12:23:17 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:11.854 12:23:17 -- common/autotest_common.sh@1457 -- # uname 00:03:11.854 12:23:17 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:11.854 12:23:17 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:11.854 12:23:17 -- common/autotest_common.sh@1477 -- # uname 00:03:11.854 12:23:17 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:11.854 12:23:17 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:11.854 12:23:17 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh --version 00:03:11.854 lcov: LCOV version 1.15 00:03:11.854 12:23:17 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_base.info 00:03:17.129 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/mdns_server.gcno 00:03:22.419 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:27.694 12:23:32 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:27.694 12:23:32 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:27.694 12:23:32 -- common/autotest_common.sh@10 -- # set +x 00:03:27.694 12:23:32 -- spdk/autotest.sh@78 -- # rm -f 00:03:27.694 12:23:32 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:30.987 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:03:30.987 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:03:30.987 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:03:30.987 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:03:30.987 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:03:30.987 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:03:30.987 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:03:30.987 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:03:30.987 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:03:30.987 
0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:03:30.987 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:03:30.987 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:03:30.987 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:03:30.987 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:03:30.988 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:03:31.247 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:03:31.247 0000:d8:00.0 (8086 0a54): Already using the nvme driver 00:03:31.247 12:23:36 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:31.247 12:23:36 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:31.247 12:23:36 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:31.247 12:23:36 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:03:31.247 12:23:36 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:03:31.247 12:23:36 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:03:31.247 12:23:36 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:03:31.247 12:23:36 -- common/autotest_common.sh@1669 -- # bdf=0000:d8:00.0 00:03:31.247 12:23:36 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:31.247 12:23:36 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:03:31.247 12:23:36 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:31.247 12:23:36 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:31.247 12:23:36 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:31.247 12:23:36 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:31.247 12:23:36 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:31.247 12:23:36 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:31.247 12:23:36 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:31.247 12:23:36 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:31.247 12:23:36 -- scripts/common.sh@390 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:31.247 No valid GPT data, bailing 00:03:31.247 12:23:36 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:31.247 12:23:36 -- scripts/common.sh@394 -- # pt= 00:03:31.247 12:23:36 -- scripts/common.sh@395 -- # return 1 00:03:31.247 12:23:36 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:31.247 1+0 records in 00:03:31.247 1+0 records out 00:03:31.247 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00159093 s, 659 MB/s 00:03:31.247 12:23:36 -- spdk/autotest.sh@105 -- # sync 00:03:31.247 12:23:36 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:31.247 12:23:36 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:31.247 12:23:36 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:39.374 12:23:44 -- spdk/autotest.sh@111 -- # uname -s 00:03:39.374 12:23:44 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:39.374 12:23:44 -- spdk/autotest.sh@111 -- # [[ 1 -eq 1 ]] 00:03:39.374 12:23:44 -- spdk/autotest.sh@112 -- # run_test setup.sh /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:03:39.374 12:23:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:39.374 12:23:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:39.374 12:23:44 -- common/autotest_common.sh@10 -- # set +x 00:03:39.374 ************************************ 00:03:39.374 
START TEST setup.sh 00:03:39.374 ************************************ 00:03:39.374 12:23:44 setup.sh -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:03:39.374 * Looking for test storage... 00:03:39.374 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:39.374 12:23:44 setup.sh -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:39.374 12:23:44 setup.sh -- common/autotest_common.sh@1711 -- # lcov --version 00:03:39.374 12:23:44 setup.sh -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:39.374 12:23:44 setup.sh -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:39.374 12:23:44 setup.sh -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:39.374 12:23:44 setup.sh -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:39.374 12:23:44 setup.sh -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:39.374 12:23:44 setup.sh -- scripts/common.sh@336 -- # IFS=.-: 00:03:39.374 12:23:44 setup.sh -- scripts/common.sh@336 -- # read -ra ver1 00:03:39.374 12:23:44 setup.sh -- scripts/common.sh@337 -- # IFS=.-: 00:03:39.374 12:23:44 setup.sh -- scripts/common.sh@337 -- # read -ra ver2 00:03:39.374 12:23:44 setup.sh -- scripts/common.sh@338 -- # local 'op=<' 00:03:39.374 12:23:44 setup.sh -- scripts/common.sh@340 -- # ver1_l=2 00:03:39.374 12:23:44 setup.sh -- scripts/common.sh@341 -- # ver2_l=1 00:03:39.374 12:23:44 setup.sh -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:39.374 12:23:44 setup.sh -- scripts/common.sh@344 -- # case "$op" in 00:03:39.374 12:23:44 setup.sh -- scripts/common.sh@345 -- # : 1 00:03:39.374 12:23:44 setup.sh -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:39.374 12:23:44 setup.sh -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:39.374 12:23:44 setup.sh -- scripts/common.sh@365 -- # decimal 1 00:03:39.374 12:23:44 setup.sh -- scripts/common.sh@353 -- # local d=1 00:03:39.374 12:23:44 setup.sh -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:39.374 12:23:44 setup.sh -- scripts/common.sh@355 -- # echo 1 00:03:39.374 12:23:44 setup.sh -- scripts/common.sh@365 -- # ver1[v]=1 00:03:39.374 12:23:44 setup.sh -- scripts/common.sh@366 -- # decimal 2 00:03:39.374 12:23:44 setup.sh -- scripts/common.sh@353 -- # local d=2 00:03:39.374 12:23:44 setup.sh -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:39.374 12:23:44 setup.sh -- scripts/common.sh@355 -- # echo 2 00:03:39.374 12:23:44 setup.sh -- scripts/common.sh@366 -- # ver2[v]=2 00:03:39.374 12:23:44 setup.sh -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:39.374 12:23:44 setup.sh -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:39.374 12:23:44 setup.sh -- scripts/common.sh@368 -- # return 0 00:03:39.374 12:23:44 setup.sh -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:39.374 12:23:44 setup.sh -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:39.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:39.374 --rc genhtml_branch_coverage=1 00:03:39.374 --rc genhtml_function_coverage=1 00:03:39.374 --rc genhtml_legend=1 00:03:39.374 --rc geninfo_all_blocks=1 00:03:39.374 --rc geninfo_unexecuted_blocks=1 00:03:39.374 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:39.374 ' 00:03:39.374 12:23:44 setup.sh -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:39.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:39.374 --rc genhtml_branch_coverage=1 00:03:39.374 --rc genhtml_function_coverage=1 00:03:39.374 --rc genhtml_legend=1 00:03:39.374 --rc geninfo_all_blocks=1 00:03:39.374 --rc geninfo_unexecuted_blocks=1 00:03:39.374 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:39.374 ' 00:03:39.374 12:23:44 setup.sh -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:39.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:39.374 --rc genhtml_branch_coverage=1 00:03:39.374 --rc genhtml_function_coverage=1 00:03:39.374 --rc genhtml_legend=1 00:03:39.374 --rc geninfo_all_blocks=1 00:03:39.374 --rc geninfo_unexecuted_blocks=1 00:03:39.374 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:39.374 ' 00:03:39.374 12:23:44 setup.sh -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:39.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:39.374 --rc genhtml_branch_coverage=1 00:03:39.374 --rc genhtml_function_coverage=1 00:03:39.374 --rc genhtml_legend=1 00:03:39.374 --rc geninfo_all_blocks=1 00:03:39.374 --rc geninfo_unexecuted_blocks=1 00:03:39.374 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:39.374 ' 00:03:39.374 12:23:44 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:39.374 12:23:44 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:39.374 12:23:44 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:03:39.374 12:23:44 setup.sh -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:39.374 12:23:44 setup.sh -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:39.374 
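(Annotation) The repeated cmp_versions / "lt 1.15 2" walk in the trace is deciding whether the installed lcov (1.15) is older than 2.x, which in turn selects the --rc coverage flags exported just above. A simplified stand-in for that check, not the real scripts/common.sh helper: split both versions on '.', '-' and ':', then compare numeric fields left to right, padding the shorter one with zeros.

  version_lt() {
      local -a a b
      IFS='.-:' read -ra a <<< "$1"
      IFS='.-:' read -ra b <<< "$2"
      local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      for (( i = 0; i < n; i++ )); do
          local x=${a[i]:-0} y=${b[i]:-0}   # missing fields count as 0
          (( x < y )) && return 0
          (( x > y )) && return 1
      done
      return 1   # equal versions are not "less than"
  }
  version_lt 1.15 2 && echo "lcov is pre-2.x: keep only branch/function --rc options"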
12:23:44 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:39.374 ************************************ 00:03:39.374 START TEST acl 00:03:39.374 ************************************ 00:03:39.374 12:23:44 setup.sh.acl -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:03:39.374 * Looking for test storage... 00:03:39.374 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:39.374 12:23:44 setup.sh.acl -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:39.374 12:23:44 setup.sh.acl -- common/autotest_common.sh@1711 -- # lcov --version 00:03:39.374 12:23:44 setup.sh.acl -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:39.374 12:23:44 setup.sh.acl -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:39.374 12:23:44 setup.sh.acl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:39.374 12:23:44 setup.sh.acl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:39.374 12:23:44 setup.sh.acl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:39.374 12:23:44 setup.sh.acl -- scripts/common.sh@336 -- # IFS=.-: 00:03:39.374 12:23:44 setup.sh.acl -- scripts/common.sh@336 -- # read -ra ver1 00:03:39.374 12:23:44 setup.sh.acl -- scripts/common.sh@337 -- # IFS=.-: 00:03:39.374 12:23:44 setup.sh.acl -- scripts/common.sh@337 -- # read -ra ver2 00:03:39.374 12:23:44 setup.sh.acl -- scripts/common.sh@338 -- # local 'op=<' 00:03:39.374 12:23:44 setup.sh.acl -- scripts/common.sh@340 -- # ver1_l=2 00:03:39.374 12:23:44 setup.sh.acl -- scripts/common.sh@341 -- # ver2_l=1 00:03:39.374 12:23:44 setup.sh.acl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:39.375 12:23:44 setup.sh.acl -- scripts/common.sh@344 -- # case "$op" in 00:03:39.375 12:23:44 setup.sh.acl -- scripts/common.sh@345 -- # : 1 00:03:39.375 12:23:44 setup.sh.acl -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:39.375 12:23:44 setup.sh.acl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:39.375 12:23:44 setup.sh.acl -- scripts/common.sh@365 -- # decimal 1 00:03:39.375 12:23:44 setup.sh.acl -- scripts/common.sh@353 -- # local d=1 00:03:39.375 12:23:44 setup.sh.acl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:39.375 12:23:44 setup.sh.acl -- scripts/common.sh@355 -- # echo 1 00:03:39.375 12:23:44 setup.sh.acl -- scripts/common.sh@365 -- # ver1[v]=1 00:03:39.375 12:23:44 setup.sh.acl -- scripts/common.sh@366 -- # decimal 2 00:03:39.375 12:23:44 setup.sh.acl -- scripts/common.sh@353 -- # local d=2 00:03:39.375 12:23:44 setup.sh.acl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:39.375 12:23:44 setup.sh.acl -- scripts/common.sh@355 -- # echo 2 00:03:39.375 12:23:44 setup.sh.acl -- scripts/common.sh@366 -- # ver2[v]=2 00:03:39.375 12:23:44 setup.sh.acl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:39.375 12:23:44 setup.sh.acl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:39.375 12:23:44 setup.sh.acl -- scripts/common.sh@368 -- # return 0 00:03:39.375 12:23:44 setup.sh.acl -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:39.375 12:23:44 setup.sh.acl -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:39.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:39.375 --rc genhtml_branch_coverage=1 00:03:39.375 --rc genhtml_function_coverage=1 00:03:39.375 --rc genhtml_legend=1 00:03:39.375 --rc geninfo_all_blocks=1 00:03:39.375 --rc geninfo_unexecuted_blocks=1 00:03:39.375 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:39.375 ' 00:03:39.375 12:23:44 setup.sh.acl -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:39.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:39.375 --rc genhtml_branch_coverage=1 00:03:39.375 --rc genhtml_function_coverage=1 00:03:39.375 --rc genhtml_legend=1 00:03:39.375 --rc geninfo_all_blocks=1 00:03:39.375 --rc geninfo_unexecuted_blocks=1 00:03:39.375 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:39.375 ' 00:03:39.375 12:23:44 setup.sh.acl -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:39.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:39.375 --rc genhtml_branch_coverage=1 00:03:39.375 --rc genhtml_function_coverage=1 00:03:39.375 --rc genhtml_legend=1 00:03:39.375 --rc geninfo_all_blocks=1 00:03:39.375 --rc geninfo_unexecuted_blocks=1 00:03:39.375 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:39.375 ' 00:03:39.375 12:23:44 setup.sh.acl -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:39.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:39.375 --rc genhtml_branch_coverage=1 00:03:39.375 --rc genhtml_function_coverage=1 00:03:39.375 --rc genhtml_legend=1 00:03:39.375 --rc geninfo_all_blocks=1 00:03:39.375 --rc geninfo_unexecuted_blocks=1 00:03:39.375 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:39.375 ' 00:03:39.375 12:23:44 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:39.375 12:23:44 setup.sh.acl -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:39.375 12:23:44 setup.sh.acl -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:39.375 12:23:44 setup.sh.acl -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:03:39.375 12:23:44 setup.sh.acl -- 
common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:03:39.375 12:23:44 setup.sh.acl -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:03:39.375 12:23:44 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:03:39.375 12:23:44 setup.sh.acl -- common/autotest_common.sh@1669 -- # bdf=0000:d8:00.0 00:03:39.375 12:23:44 setup.sh.acl -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:39.375 12:23:44 setup.sh.acl -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:03:39.375 12:23:44 setup.sh.acl -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:39.375 12:23:44 setup.sh.acl -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:39.375 12:23:44 setup.sh.acl -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:39.375 12:23:44 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:39.375 12:23:44 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:39.375 12:23:44 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:39.375 12:23:44 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:39.375 12:23:44 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:39.375 12:23:44 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:39.375 12:23:44 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:43.572 12:23:48 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:43.572 12:23:48 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:43.572 12:23:48 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:43.572 12:23:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:43.572 12:23:48 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:43.572 12:23:48 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:03:46.108 Hugepages 00:03:46.108 node hugesize free / total 00:03:46.108 12:23:51 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:46.108 12:23:51 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:46.108 12:23:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:46.108 12:23:51 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:46.108 12:23:51 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:46.108 12:23:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:46.108 12:23:51 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:46.108 12:23:51 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:46.108 12:23:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:46.108 00:03:46.108 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:46.108 12:23:51 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:46.108 12:23:51 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:46.108 12:23:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:46.108 12:23:51 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:46.108 12:23:51 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:46.108 12:23:51 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:46.108 12:23:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:46.108 12:23:51 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:03:46.108 12:23:51 setup.sh.acl -- setup/acl.sh@20 -- # [[ 
ioatdma == nvme ]] 00:03:46.108 12:23:51 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:46.108 12:23:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:46.108 12:23:51 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:03:46.108 12:23:51 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:46.108 12:23:51 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:46.108 12:23:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:46.108 12:23:51 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:03:46.108 12:23:51 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:46.108 12:23:51 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:46.108 12:23:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:46.108 12:23:51 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:03:46.108 12:23:51 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:46.108 12:23:51 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:46.108 12:23:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:46.108 12:23:51 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:03:46.108 12:23:51 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:46.109 12:23:51 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:46.109 12:23:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:46.109 12:23:51 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:03:46.109 12:23:51 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:46.109 12:23:51 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:46.109 12:23:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:46.109 12:23:51 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:03:46.109 12:23:51 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:46.109 12:23:51 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:46.109 12:23:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:46.109 12:23:51 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:03:46.109 12:23:51 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:46.109 12:23:51 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:46.109 12:23:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:46.109 12:23:51 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:03:46.109 12:23:51 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:46.109 12:23:51 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:46.109 12:23:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:46.109 12:23:51 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:03:46.109 12:23:51 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:46.109 12:23:51 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:46.109 12:23:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:46.109 12:23:51 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:03:46.109 12:23:51 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:46.109 12:23:51 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:46.109 12:23:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:46.109 12:23:51 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:03:46.109 12:23:51 
setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:46.109 12:23:51 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:46.109 12:23:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:46.109 12:23:51 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:03:46.109 12:23:51 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:46.109 12:23:51 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:46.109 12:23:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:46.109 12:23:51 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:03:46.109 12:23:51 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:46.109 12:23:51 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:46.109 12:23:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:46.109 12:23:51 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:03:46.109 12:23:51 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:46.109 12:23:51 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:46.109 12:23:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:46.109 12:23:51 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:d8:00.0 == *:*:*.* ]] 00:03:46.109 12:23:51 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:46.109 12:23:51 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:03:46.109 12:23:51 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:46.109 12:23:51 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:46.109 12:23:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:46.109 12:23:51 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:46.109 12:23:51 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:46.109 12:23:51 setup.sh.acl -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:46.109 12:23:51 setup.sh.acl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:46.109 12:23:51 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:46.109 ************************************ 00:03:46.109 START TEST denied 00:03:46.109 ************************************ 00:03:46.109 12:23:51 setup.sh.acl.denied -- common/autotest_common.sh@1129 -- # denied 00:03:46.109 12:23:51 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:d8:00.0' 00:03:46.109 12:23:51 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:46.109 12:23:51 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:d8:00.0' 00:03:46.109 12:23:51 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:46.109 12:23:51 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:03:50.304 0000:d8:00.0 (8086 0a54): Skipping denied controller at 0000:d8:00.0 00:03:50.304 12:23:55 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:d8:00.0 00:03:50.304 12:23:55 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:50.304 12:23:55 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:50.304 12:23:55 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:d8:00.0 ]] 00:03:50.304 12:23:55 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:d8:00.0/driver 00:03:50.304 12:23:55 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:50.304 12:23:55 
setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:50.304 12:23:55 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:50.304 12:23:55 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:50.304 12:23:55 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:54.501 00:03:54.501 real 0m8.152s 00:03:54.501 user 0m2.563s 00:03:54.501 sys 0m4.924s 00:03:54.501 12:23:59 setup.sh.acl.denied -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:54.501 12:23:59 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:54.501 ************************************ 00:03:54.501 END TEST denied 00:03:54.501 ************************************ 00:03:54.501 12:23:59 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:54.501 12:23:59 setup.sh.acl -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:54.501 12:23:59 setup.sh.acl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:54.501 12:23:59 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:54.501 ************************************ 00:03:54.501 START TEST allowed 00:03:54.501 ************************************ 00:03:54.501 12:23:59 setup.sh.acl.allowed -- common/autotest_common.sh@1129 -- # allowed 00:03:54.501 12:23:59 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:d8:00.0 00:03:54.501 12:23:59 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:54.501 12:23:59 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:d8:00.0 .*: nvme -> .*' 00:03:54.501 12:23:59 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:54.501 12:23:59 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:03:59.779 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:03:59.779 12:24:04 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:59.779 12:24:04 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:59.779 12:24:04 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:59.779 12:24:04 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:59.779 12:24:04 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:04:03.275 00:04:03.275 real 0m8.908s 00:04:03.275 user 0m2.519s 00:04:03.275 sys 0m5.017s 00:04:03.275 12:24:08 setup.sh.acl.allowed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:03.275 12:24:08 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:03.275 ************************************ 00:04:03.275 END TEST allowed 00:04:03.275 ************************************ 00:04:03.275 00:04:03.275 real 0m24.397s 00:04:03.275 user 0m7.744s 00:04:03.275 sys 0m14.859s 00:04:03.275 12:24:08 setup.sh.acl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:03.275 12:24:08 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:03.275 ************************************ 00:04:03.275 END TEST acl 00:04:03.275 ************************************ 00:04:03.275 12:24:08 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:04:03.275 12:24:08 setup.sh -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:03.275 12:24:08 setup.sh -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:03.275 12:24:08 setup.sh 
-- common/autotest_common.sh@10 -- # set +x 00:04:03.275 ************************************ 00:04:03.275 START TEST hugepages 00:04:03.275 ************************************ 00:04:03.275 12:24:08 setup.sh.hugepages -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:04:03.536 * Looking for test storage... 00:04:03.536 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:04:03.536 12:24:08 setup.sh.hugepages -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:03.536 12:24:08 setup.sh.hugepages -- common/autotest_common.sh@1711 -- # lcov --version 00:04:03.536 12:24:08 setup.sh.hugepages -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:03.536 12:24:08 setup.sh.hugepages -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:03.536 12:24:08 setup.sh.hugepages -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:03.536 12:24:08 setup.sh.hugepages -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:03.536 12:24:08 setup.sh.hugepages -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:03.536 12:24:08 setup.sh.hugepages -- scripts/common.sh@336 -- # IFS=.-: 00:04:03.536 12:24:08 setup.sh.hugepages -- scripts/common.sh@336 -- # read -ra ver1 00:04:03.536 12:24:08 setup.sh.hugepages -- scripts/common.sh@337 -- # IFS=.-: 00:04:03.536 12:24:08 setup.sh.hugepages -- scripts/common.sh@337 -- # read -ra ver2 00:04:03.536 12:24:08 setup.sh.hugepages -- scripts/common.sh@338 -- # local 'op=<' 00:04:03.536 12:24:08 setup.sh.hugepages -- scripts/common.sh@340 -- # ver1_l=2 00:04:03.536 12:24:08 setup.sh.hugepages -- scripts/common.sh@341 -- # ver2_l=1 00:04:03.536 12:24:08 setup.sh.hugepages -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:03.536 12:24:08 setup.sh.hugepages -- scripts/common.sh@344 -- # case "$op" in 00:04:03.536 12:24:08 setup.sh.hugepages -- scripts/common.sh@345 -- # : 1 00:04:03.536 12:24:08 setup.sh.hugepages -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:03.536 12:24:08 setup.sh.hugepages -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:03.536 12:24:08 setup.sh.hugepages -- scripts/common.sh@365 -- # decimal 1 00:04:03.536 12:24:08 setup.sh.hugepages -- scripts/common.sh@353 -- # local d=1 00:04:03.536 12:24:08 setup.sh.hugepages -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:03.536 12:24:08 setup.sh.hugepages -- scripts/common.sh@355 -- # echo 1 00:04:03.537 12:24:08 setup.sh.hugepages -- scripts/common.sh@365 -- # ver1[v]=1 00:04:03.537 12:24:08 setup.sh.hugepages -- scripts/common.sh@366 -- # decimal 2 00:04:03.537 12:24:08 setup.sh.hugepages -- scripts/common.sh@353 -- # local d=2 00:04:03.537 12:24:08 setup.sh.hugepages -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:03.537 12:24:08 setup.sh.hugepages -- scripts/common.sh@355 -- # echo 2 00:04:03.537 12:24:08 setup.sh.hugepages -- scripts/common.sh@366 -- # ver2[v]=2 00:04:03.537 12:24:08 setup.sh.hugepages -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:03.537 12:24:08 setup.sh.hugepages -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:03.537 12:24:08 setup.sh.hugepages -- scripts/common.sh@368 -- # return 0 00:04:03.537 12:24:08 setup.sh.hugepages -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:03.537 12:24:08 setup.sh.hugepages -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:03.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.537 --rc genhtml_branch_coverage=1 00:04:03.537 --rc genhtml_function_coverage=1 00:04:03.537 --rc genhtml_legend=1 00:04:03.537 --rc geninfo_all_blocks=1 00:04:03.537 --rc geninfo_unexecuted_blocks=1 00:04:03.537 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:03.537 ' 00:04:03.537 12:24:08 setup.sh.hugepages -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:03.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.537 --rc genhtml_branch_coverage=1 00:04:03.537 --rc genhtml_function_coverage=1 00:04:03.537 --rc genhtml_legend=1 00:04:03.537 --rc geninfo_all_blocks=1 00:04:03.537 --rc geninfo_unexecuted_blocks=1 00:04:03.537 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:03.537 ' 00:04:03.537 12:24:08 setup.sh.hugepages -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:03.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.537 --rc genhtml_branch_coverage=1 00:04:03.537 --rc genhtml_function_coverage=1 00:04:03.537 --rc genhtml_legend=1 00:04:03.537 --rc geninfo_all_blocks=1 00:04:03.537 --rc geninfo_unexecuted_blocks=1 00:04:03.537 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:03.537 ' 00:04:03.537 12:24:08 setup.sh.hugepages -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:03.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.537 --rc genhtml_branch_coverage=1 00:04:03.537 --rc genhtml_function_coverage=1 00:04:03.537 --rc genhtml_legend=1 00:04:03.537 --rc geninfo_all_blocks=1 00:04:03.537 --rc geninfo_unexecuted_blocks=1 00:04:03.537 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:03.537 ' 00:04:03.537 12:24:08 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:03.537 12:24:08 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:03.537 12:24:08 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:03.537 12:24:08 
setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:03.537 12:24:08 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:03.537 12:24:08 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:03.537 12:24:08 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:03.537 12:24:08 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:03.537 12:24:08 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:03.537 12:24:08 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:03.537 12:24:08 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.537 12:24:08 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.537 12:24:08 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.537 12:24:08 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.537 12:24:08 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.537 12:24:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:03.537 12:24:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:03.537 12:24:08 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283776 kB' 'MemFree: 40446500 kB' 'MemAvailable: 44166076 kB' 'Buffers: 9316 kB' 'Cached: 11454464 kB' 'SwapCached: 0 kB' 'Active: 8300696 kB' 'Inactive: 3688976 kB' 'Active(anon): 7883044 kB' 'Inactive(anon): 0 kB' 'Active(file): 417652 kB' 'Inactive(file): 3688976 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 529260 kB' 'Mapped: 160900 kB' 'Shmem: 7357152 kB' 'KReclaimable: 219632 kB' 'Slab: 900464 kB' 'SReclaimable: 219632 kB' 'SUnreclaim: 680832 kB' 'KernelStack: 21856 kB' 'PageTables: 7976 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36433340 kB' 'Committed_AS: 9104760 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214288 kB' 'VmallocChunk: 0 kB' 'Percpu: 75264 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 513396 kB' 'DirectMap2M: 10706944 kB' 'DirectMap1G: 58720256 kB' 00:04:03.537 12:24:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.537 12:24:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:03.537 12:24:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:03.537 12:24:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:03.537 12:24:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.537 12:24:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:03.537 12:24:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:03.537 12:24:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:03.537 12:24:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.537 12:24:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:03.537 12:24:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 
00:04:03.537 12:24:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:03.537 12:24:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.537 12:24:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:03.537 12:24:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:03.537 12:24:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:03.537 12:24:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.537 12:24:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:03.537 12:24:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:03.537 12:24:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:03.537 12:24:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.537 12:24:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:03.537 12:24:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:03.537 12:24:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:03.537 12:24:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.537 12:24:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:03.537 12:24:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:03.537 12:24:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:03.537 12:24:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.537 12:24:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:03.537 12:24:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:03.537 12:24:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:03.537 12:24:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.537 12:24:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:03.537 12:24:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:03.537 12:24:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:03.537 12:24:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.537 12:24:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:03.537 12:24:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:03.537 12:24:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:03.537 12:24:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.537 12:24:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:03.537 12:24:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:03.537 12:24:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:03.537 12:24:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.537 12:24:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:03.537 12:24:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:03.537 12:24:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:03.537 12:24:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.537 12:24:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:03.537 12:24:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:03.537 12:24:08 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:03.537 12:24:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.537 12:24:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:03.537 12:24:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:03.537 12:24:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:03.537 12:24:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.537 12:24:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:03.537 12:24:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:03.537 12:24:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:03.537 12:24:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.537 12:24:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:03.537 12:24:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:03.537 12:24:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:03.537 12:24:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.537 12:24:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:03.537 12:24:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:03.537 12:24:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:03.537 12:24:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.537 12:24:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:03.537 12:24:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:03.537 12:24:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:03.538 12:24:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.538 12:24:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:03.538 12:24:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:03.538 12:24:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:03.538 12:24:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.538 12:24:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:03.538 12:24:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:03.538 12:24:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:03.538 12:24:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.538 12:24:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:03.538 12:24:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:03.538 12:24:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:03.538 12:24:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.538 12:24:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:03.538 12:24:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:03.538 12:24:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:03.538 12:24:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.538 12:24:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:03.538 12:24:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:03.538 12:24:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r 
var val _ 00:04:03.538 12:24:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.538 12:24:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:03.538 12:24:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:03.538 12:24:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:03.538 12:24:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.538 12:24:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:03.538 12:24:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:03.538 12:24:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:03.538 12:24:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.538 12:24:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:03.538 12:24:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:03.538 12:24:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:03.538 12:24:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.538 12:24:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:03.538 12:24:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:03.538 12:24:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:03.538 12:24:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.538 12:24:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:03.538 12:24:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:03.538 12:24:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:03.538 12:24:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.538 12:24:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:03.538 12:24:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:03.538 12:24:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:03.538 12:24:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.538 12:24:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:03.538 12:24:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:03.538 12:24:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:03.538 12:24:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.538 12:24:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:03.538 12:24:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:03.538 12:24:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:03.538 12:24:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.538 12:24:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:03.538 12:24:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:03.538 12:24:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:03.538 12:24:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.538 12:24:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:03.538 12:24:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:03.538 12:24:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:03.538 
12:24:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.538 12:24:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:03.538 12:24:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:03.538 12:24:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:03.538 12:24:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.538 12:24:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:03.538 12:24:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:03.538 12:24:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:03.538 12:24:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.538 12:24:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:03.538 12:24:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:03.538 12:24:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:03.538 12:24:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.538 12:24:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:03.538 12:24:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:03.538 12:24:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:03.538 12:24:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.538 12:24:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:03.538 12:24:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:03.538 12:24:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:03.538 12:24:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.538 12:24:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:03.538 12:24:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:03.538 12:24:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:03.538 12:24:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.538 12:24:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:03.538 12:24:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:03.538 12:24:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:03.538 12:24:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.538 12:24:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:03.538 12:24:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:03.538 12:24:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:03.538 12:24:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.538 12:24:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:03.538 12:24:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:03.538 12:24:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:03.538 12:24:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.538 12:24:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:03.538 12:24:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:03.538 12:24:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:03.538 
12:24:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.538 12:24:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:03.538 12:24:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:03.538 12:24:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:03.538 12:24:08 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.538 12:24:08 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:03.538 12:24:08 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:03.538 12:24:08 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:03.538 12:24:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.538 12:24:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:03.538 12:24:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:03.538 12:24:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:03.538 12:24:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.538 12:24:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:03.538 12:24:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:03.538 12:24:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:03.538 12:24:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.538 12:24:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:03.538 12:24:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:03.538 12:24:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:03.538 12:24:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.538 12:24:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:03.538 12:24:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:03.538 12:24:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:03.538 12:24:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.538 12:24:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:03.538 12:24:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:03.538 12:24:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:03.538 12:24:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.538 12:24:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:03.538 12:24:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:03.538 12:24:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:03.538 12:24:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.538 12:24:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:03.538 12:24:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:03.538 12:24:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:03.538 12:24:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.538 12:24:09 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:04:03.538 12:24:09 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:04:03.538 12:24:09 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 
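The trace above walks /proc/meminfo one "key: value" pair at a time (IFS=': '; read -r var val _), skipping every key until Hugepagesize matches, then echoes 2048 and returns, which is what feeds default_hugepages=2048 on the next line. A minimal sketch of that lookup pattern, under the assumption of an illustrative helper name rather than the SPDK common.sh function verbatim:

  #!/usr/bin/env bash
  # Illustrative sketch of the lookup traced above: scan /proc/meminfo
  # "key: value" pairs and print the value for the requested key.
  get_meminfo_value() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue   # non-matching keys are skipped, as in the trace
          echo "$val"
          return 0
      done < /proc/meminfo
      return 1
  }

  get_meminfo_value Hugepagesize   # prints 2048 on this node

The same scan is repeated later in this log with other targets (AnonHugePages, HugePages_Surp, HugePages_Rsvd), each time iterating the full key list until the requested field is reached.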
00:04:03.539 12:24:09 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:03.539 12:24:09 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:03.539 12:24:09 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGEMEM 00:04:03.539 12:24:09 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGENODE 00:04:03.539 12:24:09 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v NRHUGE 00:04:03.539 12:24:09 setup.sh.hugepages -- setup/hugepages.sh@197 -- # get_nodes 00:04:03.539 12:24:09 setup.sh.hugepages -- setup/hugepages.sh@26 -- # local node 00:04:03.539 12:24:09 setup.sh.hugepages -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:03.539 12:24:09 setup.sh.hugepages -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:04:03.539 12:24:09 setup.sh.hugepages -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:03.539 12:24:09 setup.sh.hugepages -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:04:03.539 12:24:09 setup.sh.hugepages -- setup/hugepages.sh@31 -- # no_nodes=2 00:04:03.539 12:24:09 setup.sh.hugepages -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:04:03.539 12:24:09 setup.sh.hugepages -- setup/hugepages.sh@198 -- # clear_hp 00:04:03.539 12:24:09 setup.sh.hugepages -- setup/hugepages.sh@36 -- # local node hp 00:04:03.539 12:24:09 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}" 00:04:03.539 12:24:09 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:03.539 12:24:09 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:04:03.539 12:24:09 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:03.539 12:24:09 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:04:03.539 12:24:09 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}" 00:04:03.539 12:24:09 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:03.539 12:24:09 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:04:03.539 12:24:09 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:03.539 12:24:09 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:04:03.539 12:24:09 setup.sh.hugepages -- setup/hugepages.sh@44 -- # export CLEAR_HUGE=yes 00:04:03.539 12:24:09 setup.sh.hugepages -- setup/hugepages.sh@44 -- # CLEAR_HUGE=yes 00:04:03.539 12:24:09 setup.sh.hugepages -- setup/hugepages.sh@200 -- # run_test single_node_setup single_node_setup 00:04:03.539 12:24:09 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:03.539 12:24:09 setup.sh.hugepages -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:03.539 12:24:09 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:03.539 ************************************ 00:04:03.539 START TEST single_node_setup 00:04:03.539 ************************************ 00:04:03.539 12:24:09 setup.sh.hugepages.single_node_setup -- common/autotest_common.sh@1129 -- # single_node_setup 00:04:03.539 12:24:09 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@135 -- # get_test_nr_hugepages 2097152 0 00:04:03.539 12:24:09 setup.sh.hugepages.single_node_setup 
-- setup/hugepages.sh@48 -- # local size=2097152 00:04:03.539 12:24:09 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@49 -- # (( 2 > 1 )) 00:04:03.539 12:24:09 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@50 -- # shift 00:04:03.539 12:24:09 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@51 -- # node_ids=('0') 00:04:03.539 12:24:09 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@51 -- # local node_ids 00:04:03.539 12:24:09 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:04:03.539 12:24:09 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@56 -- # nr_hugepages=1024 00:04:03.539 12:24:09 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 0 00:04:03.539 12:24:09 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@61 -- # user_nodes=('0') 00:04:03.539 12:24:09 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@61 -- # local user_nodes 00:04:03.539 12:24:09 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024 00:04:03.539 12:24:09 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:04:03.539 12:24:09 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@66 -- # nodes_test=() 00:04:03.539 12:24:09 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@66 -- # local -g nodes_test 00:04:03.539 12:24:09 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@68 -- # (( 1 > 0 )) 00:04:03.539 12:24:09 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@69 -- # for _no_nodes in "${user_nodes[@]}" 00:04:03.539 12:24:09 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@70 -- # nodes_test[_no_nodes]=1024 00:04:03.539 12:24:09 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@72 -- # return 0 00:04:03.539 12:24:09 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@136 -- # NRHUGE=1024 00:04:03.539 12:24:09 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@136 -- # HUGENODE=0 00:04:03.539 12:24:09 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@136 -- # setup output 00:04:03.539 12:24:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:04:03.539 12:24:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:06.830 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:06.830 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:06.830 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:06.830 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:06.830 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:06.830 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:06.830 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:06.830 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:06.830 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:06.830 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:06.830 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:06.830 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:06.830 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:06.830 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:07.089 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:07.089 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:08.470 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:08.735 12:24:14 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@137 -- # 
verify_nr_hugepages 00:04:08.735 12:24:14 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@88 -- # local node 00:04:08.735 12:24:14 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@89 -- # local sorted_t 00:04:08.735 12:24:14 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@90 -- # local sorted_s 00:04:08.735 12:24:14 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@91 -- # local surp 00:04:08.735 12:24:14 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@92 -- # local resv 00:04:08.735 12:24:14 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@93 -- # local anon 00:04:08.735 12:24:14 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:08.735 12:24:14 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:04:08.735 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:08.735 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node= 00:04:08.735 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:04:08.735 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:08.735 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.735 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:08.735 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:08.735 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:08.735 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:08.735 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.735 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.735 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283776 kB' 'MemFree: 42673296 kB' 'MemAvailable: 46392440 kB' 'Buffers: 9316 kB' 'Cached: 11454612 kB' 'SwapCached: 0 kB' 'Active: 8301200 kB' 'Inactive: 3688976 kB' 'Active(anon): 7883548 kB' 'Inactive(anon): 0 kB' 'Active(file): 417652 kB' 'Inactive(file): 3688976 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 530100 kB' 'Mapped: 161096 kB' 'Shmem: 7357300 kB' 'KReclaimable: 218768 kB' 'Slab: 898172 kB' 'SReclaimable: 218768 kB' 'SUnreclaim: 679404 kB' 'KernelStack: 21920 kB' 'PageTables: 8040 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481916 kB' 'Committed_AS: 9104804 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214416 kB' 'VmallocChunk: 0 kB' 'Percpu: 75264 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 513396 kB' 'DirectMap2M: 10706944 kB' 'DirectMap1G: 58720256 kB' 00:04:08.735 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
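Before reading AnonHugePages, verify_nr_hugepages (hugepages.sh@95 above) tests the transparent-hugepage mode string, "always [madvise] never" here, against *[never]*; since madvise rather than never is selected, the test passes and the anonymous huge page figure is looked up in the snapshot just printed, which reports AnonHugePages as 0 kB. A rough equivalent of that gate, assuming the usual sysfs location for the mode string rather than quoting the script:

  # Sketch of the THP gate: proceed to count anonymous huge pages only
  # when transparent hugepages are not forced off ("[never]").
  thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
  if [[ $thp != *"[never]"* ]]; then
      # THP active: anonymous huge pages could contribute, so read the counter.
      anon_kb=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
      echo "anon=${anon_kb}"
  else
      echo "anon=0"
  fi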
00:04:08.735 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.735 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.735 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.735 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.735 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.735 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.735 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.735 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.735 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.735 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.735 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.735 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.735 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.735 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.735 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.735 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.735 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.735 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.735 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.735 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.735 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.735 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.735 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.735 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.735 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.735 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.735 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.735 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.735 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.735 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.735 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.735 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.735 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.735 12:24:14 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.735 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.735 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.735 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.735 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.735 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.735 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.735 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.735 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.735 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.735 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.735 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.735 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.735 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.735 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.735 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.735 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.735 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.735 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.735 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.735 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.735 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SReclaimable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 
00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@96 -- # anon=0 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node= 00:04:08.736 12:24:14 
setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.736 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283776 kB' 'MemFree: 42671052 kB' 'MemAvailable: 46390196 kB' 'Buffers: 9316 kB' 'Cached: 11454616 kB' 'SwapCached: 0 kB' 'Active: 8300648 kB' 'Inactive: 3688976 kB' 'Active(anon): 7882996 kB' 'Inactive(anon): 0 kB' 'Active(file): 417652 kB' 'Inactive(file): 3688976 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 528480 kB' 'Mapped: 161020 kB' 'Shmem: 7357304 kB' 'KReclaimable: 218768 kB' 'Slab: 898212 kB' 'SReclaimable: 218768 kB' 'SUnreclaim: 679444 kB' 'KernelStack: 21856 kB' 'PageTables: 7672 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481916 kB' 'Committed_AS: 9104824 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214448 kB' 'VmallocChunk: 0 kB' 'Percpu: 75264 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 513396 kB' 'DirectMap2M: 10706944 kB' 'DirectMap1G: 58720256 kB' 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # 
[[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # 
continue 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.737 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.737 12:24:14 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.738 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.738 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.738 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.738 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.738 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.738 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.738 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.738 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.738 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.738 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.738 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.738 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.738 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.738 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.738 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.738 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.738 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.738 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.738 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.738 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.738 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.738 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.738 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.738 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.738 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.738 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.738 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.738 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.738 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.738 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.738 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.738 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.738 12:24:14 
setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.738 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.738 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.738 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.738 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.738 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.738 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.738 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.738 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.738 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.738 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.738 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.738 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.738 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.738 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.738 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.738 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.738 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.738 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.738 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.738 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.738 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.738 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.738 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.738 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.738 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.738 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.738 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.738 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.738 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.738 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.738 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.738 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.738 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.738 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.738 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.738 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.738 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.738 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.738 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.738 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.738 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.738 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.738 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.738 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.738 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.738 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.738 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.738 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.738 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.738 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.738 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.738 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.738 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.738 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.738 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.738 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.738 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.738 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.738 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.738 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.738 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0 00:04:08.738 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0 00:04:08.738 12:24:14 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@98 -- # surp=0 00:04:08.738 12:24:14 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:04:08.738 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local 
get=HugePages_Rsvd 00:04:08.738 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node= 00:04:08.738 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:04:08.738 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:08.738 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.738 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:08.738 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:08.738 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:08.738 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:08.738 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.738 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.738 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283776 kB' 'MemFree: 42671324 kB' 'MemAvailable: 46390468 kB' 'Buffers: 9316 kB' 'Cached: 11454632 kB' 'SwapCached: 0 kB' 'Active: 8300204 kB' 'Inactive: 3688976 kB' 'Active(anon): 7882552 kB' 'Inactive(anon): 0 kB' 'Active(file): 417652 kB' 'Inactive(file): 3688976 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 529000 kB' 'Mapped: 161020 kB' 'Shmem: 7357320 kB' 'KReclaimable: 218768 kB' 'Slab: 898212 kB' 'SReclaimable: 218768 kB' 'SUnreclaim: 679444 kB' 'KernelStack: 21824 kB' 'PageTables: 7888 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481916 kB' 'Committed_AS: 9106264 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214448 kB' 'VmallocChunk: 0 kB' 'Percpu: 75264 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 513396 kB' 'DirectMap2M: 10706944 kB' 'DirectMap1G: 58720256 kB' 00:04:08.739 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.739 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.739 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.739 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.739 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.739 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.739 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.739 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.739 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.739 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.739 12:24:14 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.739 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.739 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.739 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.739 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.739 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.739 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.739 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.739 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.739 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.739 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.739 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.739 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.739 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.739 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.739 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.739 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.739 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.739 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.739 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.739 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.739 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.739 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.739 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.739 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.739 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.739 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.739 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.739 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.739 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.739 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.739 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.739 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.739 12:24:14 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.739 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.739 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.739 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.739 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.739 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.739 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.739 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.739 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.739 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.739 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.739 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.739 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.739 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.739 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.739 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.739 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.739 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.739 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.739 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.739 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.739 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.739 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.739 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.739 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.739 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.739 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.739 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.739 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.739 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.739 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.739 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.739 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.739 12:24:14 
setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.739 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.739 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.739 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.739 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.739 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.739 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.739 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.739 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.739 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.739 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.739 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.739 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.739 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.739 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.739 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.739 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.739 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.739 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.739 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.739 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.739 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.739 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.739 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.740 12:24:14 
setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.740 12:24:14 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.740 12:24:14 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@99 -- # resv=0 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024 00:04:08.740 nr_hugepages=1024 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:04:08.740 resv_hugepages=0 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:04:08.740 surplus_hugepages=0 00:04:08.740 12:24:14 
setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:04:08.740 anon_hugepages=0 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages )) 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node= 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:08.740 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283776 kB' 'MemFree: 42676384 kB' 'MemAvailable: 46395528 kB' 'Buffers: 9316 kB' 'Cached: 11454652 kB' 'SwapCached: 0 kB' 'Active: 8300720 kB' 'Inactive: 3688976 kB' 'Active(anon): 7883068 kB' 'Inactive(anon): 0 kB' 'Active(file): 417652 kB' 'Inactive(file): 3688976 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 529028 kB' 'Mapped: 161020 kB' 'Shmem: 7357340 kB' 'KReclaimable: 218768 kB' 'Slab: 898216 kB' 'SReclaimable: 218768 kB' 'SUnreclaim: 679448 kB' 'KernelStack: 21856 kB' 'PageTables: 7940 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481916 kB' 'Committed_AS: 9104688 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214432 kB' 'VmallocChunk: 0 kB' 'Percpu: 75264 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 513396 kB' 'DirectMap2M: 10706944 kB' 'DirectMap1G: 58720256 kB' 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@32 -- # continue 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read 
-r var val _ 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.741 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ 
Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.742 12:24:14 
setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 1024 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@111 -- # get_nodes 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@26 -- # local node 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- 
setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=0 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@31 -- # no_nodes=2 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node=0 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.742 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32585368 kB' 'MemFree: 26020152 kB' 'MemUsed: 6565216 kB' 'SwapCached: 0 kB' 'Active: 2786540 kB' 'Inactive: 184612 kB' 'Active(anon): 2615508 kB' 'Inactive(anon): 0 kB' 'Active(file): 171032 kB' 'Inactive(file): 184612 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2740676 kB' 'Mapped: 69436 kB' 'AnonPages: 233608 kB' 'Shmem: 2385032 kB' 'KernelStack: 12088 kB' 'PageTables: 3944 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 112144 kB' 'Slab: 431568 kB' 'SReclaimable: 112144 kB' 'SUnreclaim: 319424 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@32 -- # continue 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # 
IFS=': ' 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.743 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.744 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.744 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.744 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.744 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.744 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.744 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.744 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.744 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.744 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.744 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.744 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.744 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.744 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.744 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.744 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.744 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.744 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read 
-r var val _ 00:04:08.744 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.744 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:08.744 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:08.744 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:08.744 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.744 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0 00:04:08.744 12:24:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0 00:04:08.744 12:24:14 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:04:08.744 12:24:14 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:04:08.744 12:24:14 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:04:08.744 12:24:14 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:04:08.744 12:24:14 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@127 -- # echo 'node0=1024 expecting 1024' 00:04:08.744 node0=1024 expecting 1024 00:04:08.744 12:24:14 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@129 -- # [[ 1024 == \1\0\2\4 ]] 00:04:08.744 00:04:08.744 real 0m5.199s 00:04:08.744 user 0m1.386s 00:04:08.744 sys 0m2.305s 00:04:08.744 12:24:14 setup.sh.hugepages.single_node_setup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:08.744 12:24:14 setup.sh.hugepages.single_node_setup -- common/autotest_common.sh@10 -- # set +x 00:04:08.744 ************************************ 00:04:08.744 END TEST single_node_setup 00:04:08.744 ************************************ 00:04:09.004 12:24:14 setup.sh.hugepages -- setup/hugepages.sh@201 -- # run_test even_2G_alloc even_2G_alloc 00:04:09.004 12:24:14 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:09.004 12:24:14 setup.sh.hugepages -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:09.004 12:24:14 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:09.004 ************************************ 00:04:09.004 START TEST even_2G_alloc 00:04:09.004 ************************************ 00:04:09.004 12:24:14 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1129 -- # even_2G_alloc 00:04:09.004 12:24:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@142 -- # get_test_nr_hugepages 2097152 00:04:09.004 12:24:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@48 -- # local size=2097152 00:04:09.004 12:24:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 )) 00:04:09.004 12:24:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:04:09.004 12:24:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1024 00:04:09.004 12:24:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 00:04:09.004 12:24:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:04:09.004 12:24:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:04:09.004 12:24:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@63 -- # local 
_nr_hugepages=1024 00:04:09.004 12:24:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:04:09.004 12:24:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:04:09.004 12:24:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:04:09.004 12:24:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:04:09.004 12:24:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@73 -- # (( 0 > 0 )) 00:04:09.004 12:24:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:04:09.004 12:24:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=512 00:04:09.004 12:24:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # : 512 00:04:09.004 12:24:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 1 00:04:09.004 12:24:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:04:09.004 12:24:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=512 00:04:09.004 12:24:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # : 0 00:04:09.004 12:24:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:09.004 12:24:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:04:09.004 12:24:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@143 -- # NRHUGE=1024 00:04:09.004 12:24:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@143 -- # setup output 00:04:09.004 12:24:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:09.004 12:24:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:12.312 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:12.312 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:12.312 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:12.312 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:12.312 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:12.312 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:12.312 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:12.312 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:12.312 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:12.312 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:12.312 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:12.312 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:12.312 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:12.312 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:12.312 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:12.312 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:12.312 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:12.312 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@144 -- # verify_nr_hugepages 00:04:12.312 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@88 -- # local node 00:04:12.312 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local sorted_t 00:04:12.312 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_s 00:04:12.312 12:24:17 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local surp 00:04:12.312 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local resv 00:04:12.312 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local anon 00:04:12.312 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:12.312 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:04:12.312 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:12.312 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:12.312 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:12.312 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:12.312 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:12.312 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:12.312 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:12.312 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:12.312 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:12.312 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.312 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.312 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283776 kB' 'MemFree: 42700432 kB' 'MemAvailable: 46419576 kB' 'Buffers: 9316 kB' 'Cached: 11454776 kB' 'SwapCached: 0 kB' 'Active: 8300644 kB' 'Inactive: 3688976 kB' 'Active(anon): 7882992 kB' 'Inactive(anon): 0 kB' 'Active(file): 417652 kB' 'Inactive(file): 3688976 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 528760 kB' 'Mapped: 161008 kB' 'Shmem: 7357464 kB' 'KReclaimable: 218768 kB' 'Slab: 897972 kB' 'SReclaimable: 218768 kB' 'SUnreclaim: 679204 kB' 'KernelStack: 21792 kB' 'PageTables: 7748 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481916 kB' 'Committed_AS: 9103080 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214368 kB' 'VmallocChunk: 0 kB' 'Percpu: 75264 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 513396 kB' 'DirectMap2M: 10706944 kB' 'DirectMap1G: 58720256 kB' 00:04:12.312 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.312 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.312 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.312 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.312 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.312 12:24:17 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.312 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.312 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.312 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.312 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.312 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.312 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.312 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.312 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.312 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.312 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.312 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.312 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.312 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.312 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.312 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.312 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.312 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.312 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.312 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.312 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.312 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.312 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.312 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.312 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.312 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.312 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.312 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.312 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.312 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.312 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.312 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.312 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.312 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.312 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
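(Editor's note, not part of the console output: the long runs of "[[ <field> == ... ]] / continue" entries above and below are one xtrace line per /proc/meminfo field. The helper reads each "name: value" pair with IFS=': ' and skips it until the requested field — AnonHugePages here, HugePages_Surp and HugePages_Rsvd in the later passes — is reached, then echoes its value. A minimal sketch of that scan, assuming plain /proc/meminfo and a hypothetical get_meminfo_sketch helper; this is an illustration, not the actual setup/common.sh code.)

```bash
#!/usr/bin/env bash
# Illustrative sketch only: the xtrace above is this loop unrolled once per
# /proc/meminfo line. Given a field name such as AnonHugePages or
# HugePages_Surp, scan /proc/meminfo with IFS=': ' and print its value.
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # skip non-matching fields
        echo "$val"
        return 0
    done < /proc/meminfo
    return 1
}

get_meminfo_sketch HugePages_Surp   # e.g. prints 0 when no surplus pages exist
```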
00:04:12.312 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.312 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.312 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.312 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.312 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.312 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.312 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.312 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.312 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.312 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.312 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.312 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.312 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.312 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.312 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.312 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.312 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.313 
12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # anon=0 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.313 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283776 kB' 'MemFree: 42700640 kB' 'MemAvailable: 46419784 kB' 'Buffers: 9316 kB' 'Cached: 11454792 kB' 'SwapCached: 0 kB' 'Active: 8300628 kB' 
'Inactive: 3688976 kB' 'Active(anon): 7882976 kB' 'Inactive(anon): 0 kB' 'Active(file): 417652 kB' 'Inactive(file): 3688976 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 528756 kB' 'Mapped: 161008 kB' 'Shmem: 7357480 kB' 'KReclaimable: 218768 kB' 'Slab: 897940 kB' 'SReclaimable: 218768 kB' 'SUnreclaim: 679172 kB' 'KernelStack: 21776 kB' 'PageTables: 7692 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481916 kB' 'Committed_AS: 9103100 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214336 kB' 'VmallocChunk: 0 kB' 'Percpu: 75264 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 513396 kB' 'DirectMap2M: 10706944 kB' 'DirectMap1G: 58720256 kB' 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.314 12:24:17 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.314 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.315 12:24:17 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@98 -- # surp=0 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283776 kB' 'MemFree: 42700940 kB' 'MemAvailable: 46420084 kB' 'Buffers: 9316 kB' 'Cached: 11454796 kB' 'SwapCached: 0 kB' 'Active: 8301072 kB' 'Inactive: 3688976 kB' 'Active(anon): 7883420 kB' 'Inactive(anon): 0 kB' 'Active(file): 417652 kB' 'Inactive(file): 3688976 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 529200 kB' 'Mapped: 161008 kB' 'Shmem: 7357484 kB' 'KReclaimable: 218768 kB' 'Slab: 897940 kB' 'SReclaimable: 218768 kB' 'SUnreclaim: 679172 kB' 'KernelStack: 21792 kB' 'PageTables: 7752 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481916 kB' 'Committed_AS: 9103120 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214336 kB' 'VmallocChunk: 0 kB' 
'Percpu: 75264 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 513396 kB' 'DirectMap2M: 10706944 kB' 'DirectMap1G: 58720256 kB' 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.315 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.316 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.316 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.316 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.316 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.316 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.316 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.316 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.316 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.316 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.316 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.316 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.316 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.316 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.316 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.316 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.316 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.316 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.316 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.316 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.316 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.316 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.316 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.316 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.316 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.316 12:24:17 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.316 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.316 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.316 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.316 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.316 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.316 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.316 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.316 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.316 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.316 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.316 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.316 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.316 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.316 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.316 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.316 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.316 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.316 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.316 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.316 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.316 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.316 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.316 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.316 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.316 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.316 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.316 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.316 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.316 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.316 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.316 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.316 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.316 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.316 12:24:17 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.316 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.316 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.316 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.316 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.316 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.316 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.316 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.316 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.316 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.316 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.316 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.316 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.316 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.316 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.316 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.316 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.316 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.316 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.316 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.316 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.316 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.316 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.316 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.316 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.316 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.316 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.316 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.316 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.316 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.316 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.316 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.316 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.316 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.316 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.316 12:24:17 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.316 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.316 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.316 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.316 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.316 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.316 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.316 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.316 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.316 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.316 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.316 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.316 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.317 12:24:17 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@33 -- # echo 0 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # resv=0 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024 00:04:12.317 nr_hugepages=1024 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:04:12.317 resv_hugepages=0 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:04:12.317 surplus_hugepages=0 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:04:12.317 anon_hugepages=0 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages )) 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.317 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283776 kB' 'MemFree: 42701444 kB' 'MemAvailable: 46420588 kB' 'Buffers: 9316 kB' 'Cached: 11454796 kB' 'SwapCached: 0 kB' 'Active: 8301108 kB' 'Inactive: 3688976 kB' 'Active(anon): 7883456 kB' 'Inactive(anon): 0 kB' 'Active(file): 417652 kB' 'Inactive(file): 3688976 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 529236 kB' 'Mapped: 161008 kB' 'Shmem: 7357484 kB' 'KReclaimable: 218768 kB' 'Slab: 897940 kB' 'SReclaimable: 218768 kB' 'SUnreclaim: 679172 kB' 'KernelStack: 21808 kB' 'PageTables: 7808 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481916 kB' 'Committed_AS: 9103144 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214336 kB' 'VmallocChunk: 0 kB' 'Percpu: 75264 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 
513396 kB' 'DirectMap2M: 10706944 kB' 'DirectMap1G: 58720256 kB' 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.318 12:24:17 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# continue 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.318 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.319 12:24:17 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@26 -- # local node 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@31 -- # no_nodes=2 00:04:12.319 12:24:17 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32585368 kB' 'MemFree: 27087656 kB' 'MemUsed: 5497712 kB' 'SwapCached: 0 kB' 'Active: 2785804 kB' 'Inactive: 184612 kB' 'Active(anon): 2614772 kB' 'Inactive(anon): 0 kB' 'Active(file): 171032 kB' 'Inactive(file): 184612 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2740684 kB' 'Mapped: 69420 kB' 'AnonPages: 232852 kB' 'Shmem: 2385040 kB' 'KernelStack: 12024 kB' 'PageTables: 3976 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 112144 kB' 'Slab: 431092 kB' 'SReclaimable: 112144 kB' 'SUnreclaim: 318948 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.319 12:24:17 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.319 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.320 
12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.320 12:24:17 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:12.320 12:24:17 setup.sh.hugepages.even_2G_alloc -- 
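The block above is setup/common.sh's get_meminfo helper scanning a per-node meminfo file one field at a time: every key that is not the requested HugePages_Surp falls through to the continue branch, and when the key finally matches, its value is echoed (0 here) and the function returns. A minimal standalone sketch of that loop, reconstructed from the xtrace (function and variable names are taken from the trace; the real setup/common.sh may differ in detail):

shopt -s extglob   # needed for the +([0-9]) pattern below

get_meminfo() {
	local get=$1 node=$2
	local var val _ line
	local mem_f=/proc/meminfo mem

	# With a node argument, read the per-node stats from sysfs instead.
	if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi

	mapfile -t mem < "$mem_f"
	# Per-node files prefix every line with "Node <n> "; strip that prefix.
	mem=("${mem[@]#Node +([0-9]) }")

	for line in "${mem[@]}"; do
		IFS=': ' read -r var val _ <<< "$line"
		[[ $var == "$get" ]] || continue   # non-matching keys are skipped, as in the trace
		echo "$val"                        # e.g. 0 for HugePages_Surp
		return 0
	done
	return 1
}

# Example call matching the trace: get_meminfo HugePages_Surp 1  ->  prints 0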
setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:04:12.321 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:04:12.321 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:04:12.321 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 1 00:04:12.321 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:12.321 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:04:12.321 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:12.321 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:12.321 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:12.321 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:12.321 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:12.321 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:12.321 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:12.321 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.321 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.321 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27698408 kB' 'MemFree: 15613940 kB' 'MemUsed: 12084468 kB' 'SwapCached: 0 kB' 'Active: 5515284 kB' 'Inactive: 3504364 kB' 'Active(anon): 5268664 kB' 'Inactive(anon): 0 kB' 'Active(file): 246620 kB' 'Inactive(file): 3504364 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8723492 kB' 'Mapped: 91588 kB' 'AnonPages: 296344 kB' 'Shmem: 4972508 kB' 'KernelStack: 9768 kB' 'PageTables: 3776 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 106624 kB' 'Slab: 466848 kB' 'SReclaimable: 106624 kB' 'SUnreclaim: 360224 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:12.321 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.321 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.321 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.321 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.321 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.321 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.321 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.321 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.321 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.321 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.321 12:24:17 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.321 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.321 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.321 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.321 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.321 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.321 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.321 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.321 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.321 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.321 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.321 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.321 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.321 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.321 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.321 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.321 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.321 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.321 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.321 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.321 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.321 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.321 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.321 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.321 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.321 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.321 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.321 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.321 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.321 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.321 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.321 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.321 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.321 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.321 12:24:17 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.321 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.321 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.321 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.321 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.321 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.321 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.321 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.321 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.321 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.321 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.321 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.321 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.321 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.321 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.321 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.321 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.321 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.321 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.321 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.321 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.321 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.321 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.321 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.321 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.321 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.321 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.321 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.321 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.321 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.321 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.321 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.321 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.321 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.321 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.321 
12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.321 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.321 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.321 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.321 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.322 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.322 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.322 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.322 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.322 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.322 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.322 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.322 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.322 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.322 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.322 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.322 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.322 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.322 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.322 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.322 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.322 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.322 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.322 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.322 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.322 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.322 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.322 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.322 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.322 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.322 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.322 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.322 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.322 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.322 12:24:17 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.322 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.322 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.322 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.322 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.322 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.322 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.322 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.322 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.322 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.322 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.322 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.322 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.322 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.322 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.322 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.322 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.322 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.322 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.322 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.322 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.322 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.322 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.322 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.322 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.322 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.322 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.322 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.322 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.322 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.322 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.322 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.322 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:12.322 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:12.322 12:24:17 setup.sh.hugepages.even_2G_alloc -- 
setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:04:12.322 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:04:12.322 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:04:12.322 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:04:12.322 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # echo 'node0=512 expecting 512' 00:04:12.322 node0=512 expecting 512 00:04:12.322 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:04:12.322 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:04:12.322 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:04:12.322 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # echo 'node1=512 expecting 512' 00:04:12.322 node1=512 expecting 512 00:04:12.322 12:24:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@129 -- # [[ 512 == \5\1\2 ]] 00:04:12.322 00:04:12.322 real 0m3.502s 00:04:12.322 user 0m1.341s 00:04:12.322 sys 0m2.222s 00:04:12.322 12:24:17 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:12.322 12:24:17 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:12.322 ************************************ 00:04:12.322 END TEST even_2G_alloc 00:04:12.322 ************************************ 00:04:12.582 12:24:17 setup.sh.hugepages -- setup/hugepages.sh@202 -- # run_test odd_alloc odd_alloc 00:04:12.582 12:24:17 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:12.582 12:24:17 setup.sh.hugepages -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:12.582 12:24:17 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:12.582 ************************************ 00:04:12.582 START TEST odd_alloc 00:04:12.582 ************************************ 00:04:12.582 12:24:17 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1129 -- # odd_alloc 00:04:12.582 12:24:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@149 -- # get_test_nr_hugepages 2098176 00:04:12.582 12:24:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@48 -- # local size=2098176 00:04:12.582 12:24:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 )) 00:04:12.582 12:24:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:04:12.582 12:24:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1025 00:04:12.582 12:24:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 00:04:12.582 12:24:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:04:12.582 12:24:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:04:12.582 12:24:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1025 00:04:12.582 12:24:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:04:12.582 12:24:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:04:12.582 12:24:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:04:12.582 12:24:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:04:12.582 12:24:17 
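The odd_alloc test that starts above derives its page count from a 2049 MB HUGEMEM request: 2049 MB is 2098176 kB, which at the 2048 kB hugepage size reported later in the log is 1024.5 pages, rounded to the odd count 1025 (1025 x 2048 kB = 2099200 kB, the Hugetlb value in the meminfo dumps below). A quick check of that arithmetic; the ceiling expression is illustrative, since the script's exact rounding is not visible in this excerpt:

echo $(( 2049 * 1024 ))              # 2098176 kB, the size passed to get_test_nr_hugepages
echo $(( (2098176 + 2047) / 2048 ))  # 1025 pages, matching nr_hugepages=1025 in the trace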
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@73 -- # (( 0 > 0 )) 00:04:12.582 12:24:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:04:12.582 12:24:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=512 00:04:12.582 12:24:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # : 513 00:04:12.582 12:24:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 1 00:04:12.582 12:24:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:04:12.582 12:24:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=513 00:04:12.582 12:24:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # : 0 00:04:12.582 12:24:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:12.582 12:24:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:04:12.582 12:24:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@150 -- # HUGEMEM=2049 00:04:12.582 12:24:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@150 -- # setup output 00:04:12.582 12:24:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:12.582 12:24:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:15.877 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:15.877 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:15.877 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:15.877 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:15.878 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:15.878 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:15.878 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:15.878 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:15.878 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:15.878 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:15.878 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:15.878 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:15.878 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:15.878 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:15.878 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:15.878 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:15.878 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:15.878 12:24:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@151 -- # verify_nr_hugepages 00:04:15.878 12:24:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@88 -- # local node 00:04:15.878 12:24:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local sorted_t 00:04:15.878 12:24:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_s 00:04:15.878 12:24:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local surp 00:04:15.878 12:24:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local resv 00:04:15.878 12:24:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local anon 00:04:15.878 12:24:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:15.878 12:24:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:04:15.878 12:24:21 
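Before the verification pass begins, get_test_nr_hugepages_per_node has split the 1025 pages across the two NUMA nodes: the last node gets 1025/2 = 512 pages and node 0 takes the remaining 513. A minimal standalone reconstruction of that split (variable names follow the xtrace; the ": 513" / ": 1" and ": 0" / ": 0" values in the trace correspond to the two no-op arithmetic expansions below):

_nr_hugepages=1025
_no_nodes=2
nodes_test=()
while (( _no_nodes > 0 )); do
	nodes_test[_no_nodes - 1]=$(( _nr_hugepages / _no_nodes ))
	: $(( _nr_hugepages -= nodes_test[_no_nodes - 1] ))   # remaining pages: 513, then 0
	: $(( --_no_nodes ))                                   # nodes left: 1, then 0
done
declare -p nodes_test   # declare -a nodes_test=([0]="513" [1]="512")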
setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:15.878 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:15.878 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:15.878 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:15.878 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:15.878 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:15.878 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:15.878 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:15.878 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:15.878 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.878 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.878 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283776 kB' 'MemFree: 42745220 kB' 'MemAvailable: 46464348 kB' 'Buffers: 9316 kB' 'Cached: 11454948 kB' 'SwapCached: 0 kB' 'Active: 8302172 kB' 'Inactive: 3688976 kB' 'Active(anon): 7884520 kB' 'Inactive(anon): 0 kB' 'Active(file): 417652 kB' 'Inactive(file): 3688976 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 530140 kB' 'Mapped: 159816 kB' 'Shmem: 7357636 kB' 'KReclaimable: 218736 kB' 'Slab: 897736 kB' 'SReclaimable: 218736 kB' 'SUnreclaim: 679000 kB' 'KernelStack: 21776 kB' 'PageTables: 7656 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37480892 kB' 'Committed_AS: 9096320 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214352 kB' 'VmallocChunk: 0 kB' 'Percpu: 75264 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 513396 kB' 'DirectMap2M: 10706944 kB' 'DirectMap1G: 58720256 kB' 00:04:15.878 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.878 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.878 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.878 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.878 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.878 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.878 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.878 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.878 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.878 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.878 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.878 12:24:21 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:15.878 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.878 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.878 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.878 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.878 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.878 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.878 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.878 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.878 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.878 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.878 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.878 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.878 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.878 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.878 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.878 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.878 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.878 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.878 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.878 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.878 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.878 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.878 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.878 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.878 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.878 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.878 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.878 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.878 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.878 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.878 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.878 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.878 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.878 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.878 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.878 12:24:21 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:15.878 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.878 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.878 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.878 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.878 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.878 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.878 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.878 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.878 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.878 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.878 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.878 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.878 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.878 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.878 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.878 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.878 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.878 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.878 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.878 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.878 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.878 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.878 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.878 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.878 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.878 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.878 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.878 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.878 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.878 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.878 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.878 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.878 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.878 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.878 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.878 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # anon=0 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283776 kB' 'MemFree: 42745988 kB' 'MemAvailable: 46465116 kB' 'Buffers: 9316 kB' 'Cached: 11454952 kB' 'SwapCached: 0 kB' 'Active: 8301788 kB' 'Inactive: 3688976 kB' 'Active(anon): 7884136 kB' 'Inactive(anon): 0 kB' 'Active(file): 417652 kB' 'Inactive(file): 3688976 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 529752 kB' 'Mapped: 159804 kB' 'Shmem: 7357640 kB' 'KReclaimable: 218736 kB' 'Slab: 897744 kB' 'SReclaimable: 218736 kB' 'SUnreclaim: 679008 kB' 'KernelStack: 21760 kB' 'PageTables: 7600 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37480892 kB' 'Committed_AS: 9096336 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214320 kB' 'VmallocChunk: 0 kB' 'Percpu: 75264 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 513396 kB' 'DirectMap2M: 10706944 kB' 'DirectMap1G: 58720256 kB' 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.879 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.880 12:24:21 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.880 
12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.880 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@98 -- # surp=0 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:15.881 12:24:21 
setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283776 kB' 'MemFree: 42745496 kB' 'MemAvailable: 46464624 kB' 'Buffers: 9316 kB' 'Cached: 11454968 kB' 'SwapCached: 0 kB' 'Active: 8301984 kB' 'Inactive: 3688976 kB' 'Active(anon): 7884332 kB' 'Inactive(anon): 0 kB' 'Active(file): 417652 kB' 'Inactive(file): 3688976 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 530024 kB' 'Mapped: 159856 kB' 'Shmem: 7357656 kB' 'KReclaimable: 218736 kB' 'Slab: 897744 kB' 'SReclaimable: 218736 kB' 'SUnreclaim: 679008 kB' 'KernelStack: 21776 kB' 'PageTables: 7676 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37480892 kB' 'Committed_AS: 9096360 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214288 kB' 'VmallocChunk: 0 kB' 'Percpu: 75264 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 513396 kB' 'DirectMap2M: 10706944 kB' 'DirectMap1G: 58720256 kB' 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.881 12:24:21 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.881 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.882 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.882 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.882 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:04:15.882 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.882 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.882 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.882 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.882 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.882 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.882 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.882 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.882 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.882 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.882 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.882 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.882 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.882 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.882 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.882 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.882 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.882 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.882 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.882 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.882 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.882 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.882 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.882 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.882 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.882 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.882 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.882 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.882 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.882 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.882 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.882 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.882 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.145 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.145 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.145 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:04:16.145 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.145 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.145 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.145 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.145 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.145 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.145 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.145 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.145 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.145 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.145 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.145 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.145 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.145 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.145 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.145 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.145 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.145 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.145 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.145 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.145 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.145 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.145 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.145 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.145 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.145 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.145 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.145 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.145 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.145 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.145 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.145 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.145 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.145 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.145 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.145 12:24:21 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:16.145 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.145 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.145 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.145 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.145 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.145 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.145 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.145 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.145 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.145 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.145 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.145 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.145 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.145 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.145 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.145 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.145 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.145 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.145 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.145 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.145 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.145 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.145 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.145 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.145 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.145 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.145 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.145 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.145 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.145 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.145 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.145 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.145 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.145 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.145 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.145 
12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.145 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.145 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.145 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.145 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.145 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.145 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.145 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.145 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.145 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.145 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.145 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # resv=0 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1025 00:04:16.146 nr_hugepages=1025 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:04:16.146 resv_hugepages=0 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:04:16.146 surplus_hugepages=0 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:04:16.146 anon_hugepages=0 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@106 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@108 -- # (( 1025 == nr_hugepages )) 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283776 kB' 'MemFree: 42746292 kB' 'MemAvailable: 46465420 kB' 'Buffers: 9316 kB' 'Cached: 11454988 kB' 'SwapCached: 0 kB' 'Active: 8301480 kB' 'Inactive: 3688976 kB' 'Active(anon): 7883828 kB' 'Inactive(anon): 0 kB' 'Active(file): 417652 kB' 'Inactive(file): 3688976 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 529440 kB' 'Mapped: 159804 kB' 'Shmem: 7357676 kB' 'KReclaimable: 218736 kB' 'Slab: 897744 kB' 'SReclaimable: 218736 kB' 'SUnreclaim: 679008 kB' 'KernelStack: 21760 kB' 'PageTables: 7600 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37480892 kB' 'Committed_AS: 9096380 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214304 kB' 'VmallocChunk: 0 kB' 'Percpu: 75264 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 513396 kB' 'DirectMap2M: 10706944 kB' 'DirectMap1G: 58720256 kB' 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.146 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.147 12:24:21 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.147 12:24:21 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.147 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@26 -- # local node 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=513 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@31 -- # no_nodes=2 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:04:16.148 12:24:21 
setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32585368 kB' 'MemFree: 27099524 kB' 'MemUsed: 5485844 kB' 'SwapCached: 0 kB' 'Active: 2786308 kB' 'Inactive: 184612 kB' 'Active(anon): 2615276 kB' 'Inactive(anon): 0 kB' 'Active(file): 171032 kB' 'Inactive(file): 184612 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2740712 kB' 'Mapped: 68804 kB' 'AnonPages: 233452 kB' 'Shmem: 2385068 kB' 'KernelStack: 12056 kB' 'PageTables: 4060 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 112136 kB' 'Slab: 430848 kB' 'SReclaimable: 112136 kB' 'SUnreclaim: 318712 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.148 12:24:21 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.148 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.149 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.149 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.149 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.149 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.149 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.149 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.149 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.149 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.149 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.149 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.149 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.149 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.149 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.149 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.149 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.149 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.149 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.149 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.149 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.149 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.149 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.149 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.149 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.149 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.149 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.149 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.149 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.149 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.149 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.149 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.149 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.149 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:16.149 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.149 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.149 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.149 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.149 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.149 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.149 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.149 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.149 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.149 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.149 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.149 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.149 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.149 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.149 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.149 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.149 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.149 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.149 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.149 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.149 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.149 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.149 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:16.149 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:16.149 12:24:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:04:16.149 12:24:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:04:16.149 12:24:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:04:16.149 12:24:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 1 00:04:16.149 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:16.149 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:04:16.149 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:16.149 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:16.149 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.149 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:16.149 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # 
mem_f=/sys/devices/system/node/node1/meminfo 00:04:16.149 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:16.149 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.149 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.149 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.149 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27698408 kB' 'MemFree: 15639384 kB' 'MemUsed: 12059024 kB' 'SwapCached: 0 kB' 'Active: 5520052 kB' 'Inactive: 3504364 kB' 'Active(anon): 5273432 kB' 'Inactive(anon): 0 kB' 'Active(file): 246620 kB' 'Inactive(file): 3504364 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8723632 kB' 'Mapped: 91504 kB' 'AnonPages: 301420 kB' 'Shmem: 4972648 kB' 'KernelStack: 9704 kB' 'PageTables: 3540 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 106600 kB' 'Slab: 466896 kB' 'SReclaimable: 106600 kB' 'SUnreclaim: 360296 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:16.149 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.149 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.149 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.149 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.149 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.149 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.149 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.149 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.149 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.149 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.149 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.149 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.149 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.149 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.149 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.149 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.149 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.149 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.149 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.150 12:24:21 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # continue 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.150 12:24:21 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.150 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.151 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:16.151 12:24:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:16.151 12:24:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:04:16.151 12:24:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:04:16.151 12:24:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:04:16.151 12:24:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:04:16.151 12:24:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # echo 'node0=513 expecting 513' 00:04:16.151 node0=513 expecting 513 00:04:16.151 12:24:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:04:16.151 12:24:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:04:16.151 12:24:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:04:16.151 12:24:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # echo 'node1=512 expecting 512' 00:04:16.151 node1=512 expecting 512 00:04:16.151 12:24:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@129 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:04:16.151 00:04:16.151 real 0m3.608s 00:04:16.151 user 0m1.390s 00:04:16.151 sys 0m2.285s 00:04:16.151 12:24:21 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:16.151 12:24:21 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:16.151 ************************************ 00:04:16.151 END TEST odd_alloc 00:04:16.151 ************************************ 00:04:16.151 12:24:21 setup.sh.hugepages -- setup/hugepages.sh@203 -- # run_test custom_alloc custom_alloc 00:04:16.151 12:24:21 setup.sh.hugepages -- common/autotest_common.sh@1105 -- 
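
Note on the odd_alloc check that completes above: get_meminfo reports HugePages_Total=1025 overall, then HugePages_Surp per node, and the test passes with node0=513 and node1=512 pages as expected. The per-node lookup visible in the trace works by switching from /proc/meminfo to /sys/devices/system/node/nodeN/meminfo, stripping the "Node N " prefix those files carry, and splitting each "key: value" line on ': '. Below is a condensed, simplified sketch of that lookup reconstructed from the trace; the name get_meminfo mirrors setup/common.sh, but this is not the full helper.

#!/usr/bin/env bash
shopt -s extglob    # needed for the +([0-9]) pattern used below

get_meminfo() {     # usage: get_meminfo <field> [node]
    local get=$1 node=${2:-} mem_f=/proc/meminfo line var val _
    local -a mem
    # Prefer the per-node meminfo file when a node index is given.
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node <n> "; drop that prefix.
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

get_meminfo HugePages_Total      # 1025 in the run above (513 + 512)
get_meminfo HugePages_Surp 0     # surplus pages on node 0, 0 here
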
# '[' 2 -le 1 ']' 00:04:16.151 12:24:21 setup.sh.hugepages -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:16.151 12:24:21 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:16.151 ************************************ 00:04:16.151 START TEST custom_alloc 00:04:16.151 ************************************ 00:04:16.151 12:24:21 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1129 -- # custom_alloc 00:04:16.151 12:24:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@157 -- # local IFS=, 00:04:16.151 12:24:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@159 -- # local node 00:04:16.151 12:24:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@160 -- # nodes_hp=() 00:04:16.151 12:24:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@160 -- # local nodes_hp 00:04:16.151 12:24:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@162 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:16.151 12:24:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@164 -- # get_test_nr_hugepages 1048576 00:04:16.151 12:24:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@48 -- # local size=1048576 00:04:16.151 12:24:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 )) 00:04:16.151 12:24:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:04:16.151 12:24:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=512 00:04:16.151 12:24:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 00:04:16.151 12:24:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:04:16.151 12:24:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:04:16.151 12:24:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=512 00:04:16.151 12:24:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:04:16.151 12:24:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:04:16.151 12:24:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:04:16.151 12:24:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:04:16.151 12:24:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@73 -- # (( 0 > 0 )) 00:04:16.151 12:24:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:04:16.151 12:24:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=256 00:04:16.151 12:24:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # : 256 00:04:16.151 12:24:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 1 00:04:16.151 12:24:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:04:16.151 12:24:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=256 00:04:16.151 12:24:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # : 0 00:04:16.151 12:24:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:16.151 12:24:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:04:16.151 12:24:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@165 -- # nodes_hp[0]=512 00:04:16.151 12:24:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@166 -- # (( 2 > 1 )) 00:04:16.151 12:24:21 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@167 -- # get_test_nr_hugepages 2097152 00:04:16.151 12:24:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@48 -- # local size=2097152 00:04:16.151 12:24:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 )) 00:04:16.151 12:24:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:04:16.151 12:24:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1024 00:04:16.151 12:24:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 00:04:16.151 12:24:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:04:16.151 12:24:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:04:16.151 12:24:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024 00:04:16.151 12:24:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:04:16.151 12:24:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:04:16.151 12:24:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:04:16.151 12:24:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:04:16.151 12:24:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@73 -- # (( 1 > 0 )) 00:04:16.151 12:24:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:16.151 12:24:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # nodes_test[_no_nodes]=512 00:04:16.151 12:24:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@77 -- # return 0 00:04:16.151 12:24:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@168 -- # nodes_hp[1]=1024 00:04:16.151 12:24:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@171 -- # for node in "${!nodes_hp[@]}" 00:04:16.151 12:24:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:16.151 12:24:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@173 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:16.151 12:24:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@171 -- # for node in "${!nodes_hp[@]}" 00:04:16.151 12:24:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:16.151 12:24:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@173 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:16.151 12:24:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # get_test_nr_hugepages_per_node 00:04:16.151 12:24:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:04:16.151 12:24:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:04:16.151 12:24:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024 00:04:16.151 12:24:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:04:16.151 12:24:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:04:16.151 12:24:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:04:16.151 12:24:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:04:16.151 12:24:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@73 -- # (( 2 > 0 )) 00:04:16.151 12:24:21 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@74 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:16.151 12:24:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # nodes_test[_no_nodes]=512 00:04:16.151 12:24:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:16.151 12:24:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # nodes_test[_no_nodes]=1024 00:04:16.151 12:24:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@77 -- # return 0 00:04:16.151 12:24:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:04:16.151 12:24:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # setup output 00:04:16.151 12:24:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:16.151 12:24:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:19.445 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:19.445 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:19.445 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:19.445 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:19.445 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:19.445 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:19.445 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:19.445 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:19.445 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:19.445 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:19.445 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:19.445 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:19.445 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:19.445 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:19.445 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:19.445 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:19.445 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:19.706 12:24:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nr_hugepages=1536 00:04:19.706 12:24:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # verify_nr_hugepages 00:04:19.706 12:24:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@88 -- # local node 00:04:19.706 12:24:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local sorted_t 00:04:19.706 12:24:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_s 00:04:19.706 12:24:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local surp 00:04:19.706 12:24:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local resv 00:04:19.706 12:24:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local anon 00:04:19.706 12:24:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:19.706 12:24:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:04:19.706 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:19.706 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:19.706 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:19.706 
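
Note on the custom_alloc sizing traced above: get_test_nr_hugepages is called with 1048576 and then 2097152, which against the 2048 kB Hugepagesize reported in the meminfo dumps works out to 512 and 1024 pages; those counts land in nodes_hp[0] and nodes_hp[1], and the test hands HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' to scripts/setup.sh before verify_nr_hugepages checks a total of 1536. A worked example of that arithmetic follows; the kB unit for the size argument is an inference from the trace, and this is not the hugepages.sh implementation itself.

#!/usr/bin/env bash
# Sizing arithmetic behind the custom_alloc pools (units assumed to be kB,
# matching the 2048 kB Hugepagesize shown in the surrounding meminfo dumps).
default_hugepages=2048                      # kB per hugepage
for size in 1048576 2097152; do             # the two get_test_nr_hugepages calls
    echo "$size kB -> $((size / default_hugepages)) pages"
done
# -> 1048576 kB -> 512 pages    (nodes_hp[0])
# -> 2097152 kB -> 1024 pages   (nodes_hp[1])
# setup.sh is then run with HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' and
# verify_nr_hugepages expects 512 + 1024 = 1536 pages overall
# (nr_hugepages=1536 in the trace that follows).
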
12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:19.706 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:19.706 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:19.706 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:19.706 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:19.706 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:19.706 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.706 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.706 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283776 kB' 'MemFree: 41675352 kB' 'MemAvailable: 45394480 kB' 'Buffers: 9316 kB' 'Cached: 11455120 kB' 'SwapCached: 0 kB' 'Active: 8303740 kB' 'Inactive: 3688976 kB' 'Active(anon): 7886088 kB' 'Inactive(anon): 0 kB' 'Active(file): 417652 kB' 'Inactive(file): 3688976 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 531668 kB' 'Mapped: 159864 kB' 'Shmem: 7357808 kB' 'KReclaimable: 218736 kB' 'Slab: 897628 kB' 'SReclaimable: 218736 kB' 'SUnreclaim: 678892 kB' 'KernelStack: 22160 kB' 'PageTables: 8784 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36957628 kB' 'Committed_AS: 9099280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214512 kB' 'VmallocChunk: 0 kB' 'Percpu: 75264 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 513396 kB' 'DirectMap2M: 10706944 kB' 'DirectMap1G: 58720256 kB' 00:04:19.706 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.706 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.706 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.707 12:24:25 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.707 12:24:25 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.707 12:24:25 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:19.707 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # anon=0 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283776 kB' 'MemFree: 41677876 kB' 'MemAvailable: 45397004 kB' 'Buffers: 9316 kB' 'Cached: 11455124 kB' 'SwapCached: 0 kB' 'Active: 8302360 kB' 'Inactive: 3688976 kB' 'Active(anon): 7884708 kB' 'Inactive(anon): 0 kB' 'Active(file): 417652 kB' 'Inactive(file): 3688976 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 530144 kB' 'Mapped: 159892 kB' 'Shmem: 7357812 kB' 'KReclaimable: 218736 kB' 'Slab: 897628 kB' 'SReclaimable: 218736 kB' 'SUnreclaim: 678892 kB' 'KernelStack: 21920 kB' 'PageTables: 7740 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36957628 kB' 'Committed_AS: 9099296 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214368 kB' 'VmallocChunk: 0 kB' 'Percpu: 75264 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 513396 kB' 'DirectMap2M: 10706944 kB' 'DirectMap1G: 58720256 kB' 00:04:19.708 12:24:25 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
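For reference, the meminfo snapshot printed a few lines above reports HugePages_Total: 1536 with Hugepagesize: 2048 kB; the Hugetlb line is simply their product, which is the pool size the custom_alloc test is working against. A quick check of that arithmetic in plain bash, with the values copied from the snapshot:

# Values copied from the meminfo dump in the trace above.
pages=1536
page_kb=2048
echo $(( pages * page_kb ))   # 3145728 kB, matching the Hugetlb line (3 GiB)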
00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.708 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.709 12:24:25 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:19.709 12:24:25 
setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@98 -- # surp=0 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283776 kB' 'MemFree: 41679616 kB' 'MemAvailable: 45398744 kB' 'Buffers: 9316 kB' 'Cached: 11455140 kB' 'SwapCached: 0 kB' 'Active: 8302776 kB' 'Inactive: 3688976 kB' 'Active(anon): 7885124 kB' 'Inactive(anon): 0 kB' 'Active(file): 417652 kB' 'Inactive(file): 3688976 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 530484 kB' 'Mapped: 159840 kB' 'Shmem: 7357828 kB' 'KReclaimable: 218736 kB' 'Slab: 897732 kB' 'SReclaimable: 218736 kB' 'SUnreclaim: 678996 kB' 'KernelStack: 21904 kB' 'PageTables: 8008 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36957628 kB' 'Committed_AS: 9097956 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214400 kB' 'VmallocChunk: 0 kB' 'Percpu: 75264 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 513396 kB' 'DirectMap2M: 10706944 kB' 'DirectMap1G: 58720256 kB' 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.709 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.710 12:24:25 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.710 12:24:25 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.710 
12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # resv=0 00:04:19.710 12:24:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1536 00:04:19.710 nr_hugepages=1536 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:04:19.711 resv_hugepages=0 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:04:19.711 surplus_hugepages=0 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:04:19.711 anon_hugepages=0 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@106 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@108 -- # (( 1536 == nr_hugepages )) 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:19.711 12:24:25 
setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283776 kB' 'MemFree: 41679024 kB' 'MemAvailable: 45398152 kB' 'Buffers: 9316 kB' 'Cached: 11455140 kB' 'SwapCached: 0 kB' 'Active: 8302392 kB' 'Inactive: 3688976 kB' 'Active(anon): 7884740 kB' 'Inactive(anon): 0 kB' 'Active(file): 417652 kB' 'Inactive(file): 3688976 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 530100 kB' 'Mapped: 159840 kB' 'Shmem: 7357828 kB' 'KReclaimable: 218736 kB' 'Slab: 897632 kB' 'SReclaimable: 218736 kB' 'SUnreclaim: 678896 kB' 'KernelStack: 22000 kB' 'PageTables: 8072 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36957628 kB' 'Committed_AS: 9099480 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214400 kB' 'VmallocChunk: 0 kB' 'Percpu: 75264 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 513396 kB' 'DirectMap2M: 10706944 kB' 'DirectMap1G: 58720256 kB' 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.711 12:24:25 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
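Earlier in this block the hugepages helper reported nr_hugepages=1536, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0, then asserted that the requested pool matches what the kernel exposes before re-reading HugePages_Total. A minimal sketch of that consistency check, using the variable names from the trace (the surrounding setup/hugepages.sh@106/@108 logic is assumed, not copied):

# Values echoed by the trace above.
nr_hugepages=1536
surp=0    # HugePages_Surp
resv=0    # HugePages_Rsvd
anon=0    # AnonHugePages

# The pool the test asked for must equal what /proc/meminfo reports once
# surplus and reserved pages are accounted for; only then is HugePages_Total re-read.
(( 1536 == nr_hugepages + surp + resv )) || echo "unexpected hugepage accounting"
(( 1536 == nr_hugepages )) && echo "requested pool matches; HugePages_Total is re-checked next"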
00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.711 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@26 -- # local node 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@31 -- # no_nodes=2 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32585368 kB' 'MemFree: 
27099476 kB' 'MemUsed: 5485892 kB' 'SwapCached: 0 kB' 'Active: 2786720 kB' 'Inactive: 184612 kB' 'Active(anon): 2615688 kB' 'Inactive(anon): 0 kB' 'Active(file): 171032 kB' 'Inactive(file): 184612 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2740820 kB' 'Mapped: 68840 kB' 'AnonPages: 233644 kB' 'Shmem: 2385176 kB' 'KernelStack: 12008 kB' 'PageTables: 3844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 112136 kB' 'Slab: 430920 kB' 'SReclaimable: 112136 kB' 'SUnreclaim: 318784 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.712 12:24:25 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.712 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- 
# continue 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.713 12:24:25 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 1 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27698408 kB' 'MemFree: 14578552 kB' 'MemUsed: 13119856 kB' 'SwapCached: 0 kB' 'Active: 5516324 kB' 'Inactive: 3504364 kB' 'Active(anon): 5269704 kB' 'Inactive(anon): 0 kB' 'Active(file): 246620 kB' 'Inactive(file): 3504364 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8723692 kB' 'Mapped: 91000 kB' 'AnonPages: 297072 kB' 'Shmem: 4972708 kB' 'KernelStack: 9928 kB' 
'PageTables: 4200 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 106600 kB' 'Slab: 466712 kB' 'SReclaimable: 106600 kB' 'SUnreclaim: 360112 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.713 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.714 12:24:25 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # echo 'node0=512 expecting 512' 00:04:19.714 node0=512 expecting 512 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # echo 'node1=1024 expecting 1024' 00:04:19.714 node1=1024 expecting 1024 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@129 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:04:19.714 00:04:19.714 real 0m3.648s 00:04:19.714 user 0m1.376s 00:04:19.714 sys 0m2.324s 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:19.714 12:24:25 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:19.714 ************************************ 00:04:19.714 END TEST custom_alloc 00:04:19.714 ************************************ 00:04:19.973 12:24:25 setup.sh.hugepages -- setup/hugepages.sh@204 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:19.973 12:24:25 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:19.973 12:24:25 setup.sh.hugepages -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:19.973 12:24:25 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:19.973 ************************************ 
00:04:19.973 START TEST no_shrink_alloc 00:04:19.973 ************************************ 00:04:19.973 12:24:25 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1129 -- # no_shrink_alloc 00:04:19.973 12:24:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@185 -- # get_test_nr_hugepages 2097152 0 00:04:19.973 12:24:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@48 -- # local size=2097152 00:04:19.973 12:24:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # (( 2 > 1 )) 00:04:19.973 12:24:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # shift 00:04:19.973 12:24:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # node_ids=('0') 00:04:19.973 12:24:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # local node_ids 00:04:19.973 12:24:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:04:19.973 12:24:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1024 00:04:19.973 12:24:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 0 00:04:19.973 12:24:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@61 -- # user_nodes=('0') 00:04:19.973 12:24:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:04:19.973 12:24:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024 00:04:19.973 12:24:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:04:19.973 12:24:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:04:19.973 12:24:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:04:19.973 12:24:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@68 -- # (( 1 > 0 )) 00:04:19.973 12:24:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # for _no_nodes in "${user_nodes[@]}" 00:04:19.973 12:24:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # nodes_test[_no_nodes]=1024 00:04:19.973 12:24:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@72 -- # return 0 00:04:19.973 12:24:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@188 -- # NRHUGE=1024 00:04:19.973 12:24:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@188 -- # HUGENODE=0 00:04:19.973 12:24:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@188 -- # setup output 00:04:19.973 12:24:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:19.973 12:24:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:23.265 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:23.265 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:23.265 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:23.265 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:23.265 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:23.265 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:23.265 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:23.265 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:23.265 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:23.265 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:23.265 0000:80:04.5 (8086 
2021): Already using the vfio-pci driver 00:04:23.265 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:23.265 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:23.265 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:23.265 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:23.265 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:23.265 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:23.265 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@189 -- # verify_nr_hugepages 00:04:23.265 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@88 -- # local node 00:04:23.265 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local sorted_t 00:04:23.265 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_s 00:04:23.265 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local surp 00:04:23.265 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local resv 00:04:23.265 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local anon 00:04:23.265 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:23.265 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:04:23.265 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:23.265 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:23.265 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:23.265 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:23.265 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.265 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283776 kB' 'MemFree: 42759356 kB' 'MemAvailable: 46478484 kB' 'Buffers: 9316 kB' 'Cached: 11455304 kB' 'SwapCached: 0 kB' 'Active: 8302848 kB' 'Inactive: 3688976 kB' 'Active(anon): 7885196 kB' 'Inactive(anon): 0 kB' 'Active(file): 417652 kB' 'Inactive(file): 3688976 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 530516 kB' 'Mapped: 159920 kB' 'Shmem: 7357992 kB' 'KReclaimable: 218736 kB' 'Slab: 898176 kB' 'SReclaimable: 218736 kB' 'SUnreclaim: 679440 kB' 'KernelStack: 21808 kB' 'PageTables: 7648 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481916 kB' 'Committed_AS: 9097848 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214384 kB' 'VmallocChunk: 0 kB' 'Percpu: 75264 kB' 
'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 513396 kB' 'DirectMap2M: 10706944 kB' 'DirectMap1G: 58720256 kB' 00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.266 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # anon=0 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # 
local get=HugePages_Surp 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283776 kB' 'MemFree: 42760068 kB' 'MemAvailable: 46479196 kB' 'Buffers: 9316 kB' 'Cached: 11455308 kB' 'SwapCached: 0 kB' 'Active: 8302864 kB' 'Inactive: 3688976 kB' 'Active(anon): 7885212 kB' 'Inactive(anon): 0 kB' 'Active(file): 417652 kB' 'Inactive(file): 3688976 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 530540 kB' 'Mapped: 159824 kB' 'Shmem: 7357996 kB' 'KReclaimable: 218736 kB' 'Slab: 898100 kB' 'SReclaimable: 218736 kB' 'SUnreclaim: 679364 kB' 'KernelStack: 21792 kB' 'PageTables: 7588 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481916 kB' 'Committed_AS: 9097864 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214352 kB' 'VmallocChunk: 0 kB' 'Percpu: 75264 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 513396 kB' 'DirectMap2M: 10706944 kB' 'DirectMap1G: 58720256 kB' 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.267 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.268 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.268 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.268 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.268 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.268 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.268 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.268 12:24:28 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.268 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.268 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.268 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.268 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.268 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.268 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.268 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.268 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.268 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.268 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.268 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.268 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.268 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.268 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.268 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.268 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.268 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.268 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.268 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.268 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.268 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.268 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.268 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.268 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.268 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.268 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.268 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.268 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.531 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.531 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.531 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.531 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.531 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.531 12:24:28 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.531 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.531 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.531 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.531 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.531 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.531 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.531 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.531 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.531 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.531 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.531 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.531 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.531 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.532 12:24:28 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.532 12:24:28 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # surp=0 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 
00:04:23.532 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283776 kB' 'MemFree: 42760328 kB' 'MemAvailable: 46479456 kB' 'Buffers: 9316 kB' 'Cached: 11455328 kB' 'SwapCached: 0 kB' 'Active: 8302696 kB' 'Inactive: 3688976 kB' 'Active(anon): 7885044 kB' 'Inactive(anon): 0 kB' 'Active(file): 417652 kB' 'Inactive(file): 3688976 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 530304 kB' 'Mapped: 159824 kB' 'Shmem: 7358016 kB' 'KReclaimable: 218736 kB' 'Slab: 898092 kB' 'SReclaimable: 218736 kB' 'SUnreclaim: 679356 kB' 'KernelStack: 21776 kB' 'PageTables: 7528 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481916 kB' 'Committed_AS: 9097888 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214368 kB' 'VmallocChunk: 0 kB' 'Percpu: 75264 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 513396 kB' 'DirectMap2M: 10706944 kB' 'DirectMap1G: 58720256 kB' 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.533 12:24:28 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.533 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.534 12:24:28 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.534 12:24:28 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # resv=0 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024 00:04:23.534 nr_hugepages=1024 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:04:23.534 resv_hugepages=0 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:04:23.534 surplus_hugepages=0 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:04:23.534 anon_hugepages=0 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages )) 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:23.534 12:24:28 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.534 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283776 kB' 'MemFree: 42762376 kB' 'MemAvailable: 46481504 kB' 'Buffers: 9316 kB' 'Cached: 11455348 kB' 'SwapCached: 0 kB' 'Active: 8302960 kB' 'Inactive: 3688976 kB' 'Active(anon): 7885308 kB' 'Inactive(anon): 0 kB' 'Active(file): 417652 kB' 'Inactive(file): 3688976 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 530552 kB' 'Mapped: 159824 kB' 'Shmem: 7358036 kB' 'KReclaimable: 218736 kB' 'Slab: 898092 kB' 'SReclaimable: 218736 kB' 'SUnreclaim: 679356 kB' 'KernelStack: 21776 kB' 'PageTables: 7528 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481916 kB' 'Committed_AS: 9097908 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214368 kB' 'VmallocChunk: 0 kB' 'Percpu: 75264 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 513396 kB' 'DirectMap2M: 10706944 kB' 'DirectMap1G: 58720256 kB' 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.535 12:24:28 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.535 12:24:28 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.535 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.536 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.536 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.536 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.536 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.536 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.536 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.536 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.536 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.536 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.536 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.536 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.536 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.536 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.536 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.536 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.536 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.536 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.536 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.536 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.536 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.536 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.536 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.536 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.536 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.536 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.536 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.536 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.536 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.536 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.536 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.536 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.536 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.536 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.536 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.536 12:24:28 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.536 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.536 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.536 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.536 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.536 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.536 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.536 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.536 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.536 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.536 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.536 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.536 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.536 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.536 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.536 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.536 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.536 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.536 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.536 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.536 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.536 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.536 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.536 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.536 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.536 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.536 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.536 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.536 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.536 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.536 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.536 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.536 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.536 12:24:28 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:23.536 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.536 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.536 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.536 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.536 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.536 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.536 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.536 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.536 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.536 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.536 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.536 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.536 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.536 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:23.536 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:23.536 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:23.536 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:04:23.536 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@26 -- # local node 00:04:23.536 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:23.536 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:04:23.536 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:23.536 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=0 00:04:23.536 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@31 -- # no_nodes=2 00:04:23.536 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:04:23.536 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:04:23.536 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:04:23.536 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:04:23.536 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:23.536 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:23.536 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:23.536 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:23.536 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.536 12:24:28 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:23.536 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:23.536 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.536 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:23.536 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.536 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32585368 kB' 'MemFree: 26062884 kB' 'MemUsed: 6522484 kB' 'SwapCached: 0 kB' 'Active: 2786692 kB' 'Inactive: 184612 kB' 'Active(anon): 2615660 kB' 'Inactive(anon): 0 kB' 'Active(file): 171032 kB' 'Inactive(file): 184612 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2740876 kB' 'Mapped: 68824 kB' 'AnonPages: 233632 kB' 'Shmem: 2385232 kB' 'KernelStack: 12072 kB' 'PageTables: 4092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 112136 kB' 'Slab: 431108 kB' 'SReclaimable: 112136 kB' 'SUnreclaim: 318972 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.537 12:24:28 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.537 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.538 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.538 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.538 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.538 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:23.538 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.538 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.538 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.538 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.538 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.538 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.538 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.538 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.538 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.538 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.538 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.538 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.538 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.538 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.538 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.538 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.538 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.538 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.538 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.538 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.538 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.538 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:23.538 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.538 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.538 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.538 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:23.538 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:23.538 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:04:23.538 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:04:23.538 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:04:23.538 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:04:23.538 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # echo 'node0=1024 expecting 1024' 00:04:23.538 node0=1024 expecting 1024 00:04:23.538 12:24:28 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@129 -- # [[ 1024 == \1\0\2\4 ]] 00:04:23.538 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # CLEAR_HUGE=no 00:04:23.538 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # NRHUGE=512 00:04:23.538 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # HUGENODE=0 00:04:23.538 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # setup output 00:04:23.538 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:23.538 12:24:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:26.840 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:26.840 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:26.840 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:26.840 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:26.840 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:26.840 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:26.840 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:26.840 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:26.840 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:26.840 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:26.840 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:26.840 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:26.840 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:26.840 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:26.840 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:26.840 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:26.840 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:26.840 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:26.840 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@194 -- # verify_nr_hugepages 00:04:26.840 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@88 -- # local node 00:04:26.840 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local sorted_t 00:04:26.840 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_s 00:04:26.840 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local surp 00:04:26.840 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local resv 00:04:26.840 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local anon 00:04:26.840 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:26.840 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:04:26.840 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:26.840 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:26.840 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:26.840 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:26.840 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.840 12:24:32 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:26.840 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:26.840 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.840 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.840 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.840 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.840 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283776 kB' 'MemFree: 42762308 kB' 'MemAvailable: 46481380 kB' 'Buffers: 9316 kB' 'Cached: 11455456 kB' 'SwapCached: 0 kB' 'Active: 8305792 kB' 'Inactive: 3688976 kB' 'Active(anon): 7888140 kB' 'Inactive(anon): 0 kB' 'Active(file): 417652 kB' 'Inactive(file): 3688976 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 533244 kB' 'Mapped: 159936 kB' 'Shmem: 7358144 kB' 'KReclaimable: 218624 kB' 'Slab: 897104 kB' 'SReclaimable: 218624 kB' 'SUnreclaim: 678480 kB' 'KernelStack: 21792 kB' 'PageTables: 7616 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481916 kB' 'Committed_AS: 9098520 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214400 kB' 'VmallocChunk: 0 kB' 'Percpu: 75264 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 513396 kB' 'DirectMap2M: 10706944 kB' 'DirectMap1G: 58720256 kB' 00:04:26.840 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.840 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.840 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.840 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.840 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.840 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.840 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.840 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.840 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.840 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.840 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.840 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.840 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.840 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.840 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.840 12:24:32 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.841 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.842 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.842 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.842 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.842 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.842 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.842 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.842 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.842 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.842 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.842 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.842 12:24:32 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:26.842 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.842 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.842 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.842 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.842 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.842 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.842 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.842 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.842 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.842 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.842 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:26.842 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:26.842 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # anon=0 00:04:26.842 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:04:26.842 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:26.842 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:26.842 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:26.842 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:26.842 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.842 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:26.842 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:26.842 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.842 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.842 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.842 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.842 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283776 kB' 'MemFree: 42763404 kB' 'MemAvailable: 46482476 kB' 'Buffers: 9316 kB' 'Cached: 11455460 kB' 'SwapCached: 0 kB' 'Active: 8305504 kB' 'Inactive: 3688976 kB' 'Active(anon): 7887852 kB' 'Inactive(anon): 0 kB' 'Active(file): 417652 kB' 'Inactive(file): 3688976 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 532960 kB' 'Mapped: 159832 kB' 'Shmem: 7358148 kB' 'KReclaimable: 218624 kB' 'Slab: 897144 kB' 'SReclaimable: 218624 kB' 'SUnreclaim: 678520 kB' 'KernelStack: 21792 kB' 'PageTables: 7608 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481916 kB' 'Committed_AS: 9098536 kB' 'VmallocTotal: 34359738367 kB' 
'VmallocUsed: 214384 kB' 'VmallocChunk: 0 kB' 'Percpu: 75264 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 513396 kB' 'DirectMap2M: 10706944 kB' 'DirectMap1G: 58720256 kB' 00:04:26.842 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.842 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.842 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.842 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.842 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.842 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.842 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.842 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.842 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.842 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.842 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.842 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.842 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.842 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.842 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.842 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.842 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.842 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.842 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.842 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.842 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.842 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.842 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.842 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.842 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.842 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.842 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.842 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.842 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.842 12:24:32 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.842 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.842 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.842 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.842 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.842 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.842 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.842 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.842 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.842 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.842 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.842 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.842 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.842 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.842 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.842 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.842 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.842 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.842 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.842 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.842 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.842 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.842 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.842 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.842 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.842 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.842 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.842 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.842 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.842 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.842 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.842 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.842 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.842 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.842 
12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.842 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.842 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.842 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.842 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.842 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.842 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.842 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.842 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.843 12:24:32 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
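Before this scan started, get_meminfo dumped the full snapshot it is iterating over (the printf '%s\n' record above). The hugepage fields in that snapshot are self-consistent: 1024 pre-allocated pages of 2048 kB each account for the reported 2097152 kB of Hugetlb memory, and with HugePages_Free also 1024 none of them are in use yet. A quick check with the values copied from the trace (variable names are only for illustration):

    hp_total=1024      # HugePages_Total from the snapshot
    hp_size_kb=2048    # Hugepagesize from the snapshot
    echo $(( hp_total * hp_size_kb ))   # 2097152 kB, i.e. 2 GiB, matching the Hugetlb: field

These per-key lookups (anon, surp, resv) all come back 0 and feed the no-shrink consistency checks later in the trace, where hugepages.sh verifies (( 1024 == nr_hugepages + surp + resv )).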
00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.843 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.844 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.844 12:24:32 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.844 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.844 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.844 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.844 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.844 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.844 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.844 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.844 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:26.844 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:26.844 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # surp=0 00:04:26.844 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:04:26.844 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:26.844 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:26.844 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:26.844 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:26.844 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.844 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:26.844 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:26.844 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.844 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.844 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.844 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283776 kB' 'MemFree: 42763560 kB' 'MemAvailable: 46482632 kB' 'Buffers: 9316 kB' 'Cached: 11455476 kB' 'SwapCached: 0 kB' 'Active: 8305520 kB' 'Inactive: 3688976 kB' 'Active(anon): 7887868 kB' 'Inactive(anon): 0 kB' 'Active(file): 417652 kB' 'Inactive(file): 3688976 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 532960 kB' 'Mapped: 159832 kB' 'Shmem: 7358164 kB' 'KReclaimable: 218624 kB' 'Slab: 897144 kB' 'SReclaimable: 218624 kB' 'SUnreclaim: 678520 kB' 'KernelStack: 21792 kB' 'PageTables: 7608 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481916 kB' 'Committed_AS: 9098556 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214384 kB' 'VmallocChunk: 0 kB' 'Percpu: 75264 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 
'DirectMap4k: 513396 kB' 'DirectMap2M: 10706944 kB' 'DirectMap1G: 58720256 kB' 00:04:26.844 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.844 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.844 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.844 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.844 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.844 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.844 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.844 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.844 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.844 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.844 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.844 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.844 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.844 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.844 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.844 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.844 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.844 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.844 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.844 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.844 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.844 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.844 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.844 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.844 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.844 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.844 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.844 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.844 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.844 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.844 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.844 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.844 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.844 12:24:32 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.844 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.844 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.844 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.844 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.844 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.844 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.844 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.844 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.844 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.844 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.844 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.844 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.844 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.844 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.844 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.844 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.844 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.844 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.844 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.844 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.844 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.844 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.844 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.844 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.844 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.844 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.844 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.844 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.844 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.844 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.844 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.844 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.844 12:24:32 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:26.844 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.844 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.845 12:24:32 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
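Each get_meminfo call also begins with the source-selection preamble visible at setup/common.sh@17-@29: with no node argument the probe for /sys/devices/system/node/node/meminfo (note the empty node id in the path) fails, so the function falls back to /proc/meminfo, slurps it with mapfile, and strips any leading "Node N " prefix that the per-node files carry. A rough reconstruction of that preamble from the trace; the exact conditional wiring around the [[ -n '' ]] record at @25 is an assumption:

    local get=$1 node=${2:-} var val   # common.sh@17-19: requested field, optional NUMA node id
    local mem_f mem
    mem_f=/proc/meminfo                # common.sh@22: default to the system-wide view
    if [[ -e /sys/devices/system/node/node$node/meminfo && -n $node ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo   # per-node view when a node id was passed (assumed branch)
    fi
    mapfile -t mem < "$mem_f"                              # common.sh@28
    mem=("${mem[@]#Node +([0-9]) }")                       # common.sh@29: drop the "Node N " prefix

With mem_f resolved, the scan above then walks the same field list again, this time looking for HugePages_Rsvd.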
00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.845 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.846 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.846 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.846 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.846 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.846 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.846 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.846 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.846 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.846 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.846 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.846 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.846 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.846 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.846 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.846 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.846 12:24:32 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.846 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:26.846 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:26.846 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # resv=0 00:04:26.846 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024 00:04:26.846 nr_hugepages=1024 00:04:26.846 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:04:26.846 resv_hugepages=0 00:04:26.846 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:04:26.846 surplus_hugepages=0 00:04:26.846 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:04:26.846 anon_hugepages=0 00:04:26.846 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:26.846 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages )) 00:04:26.846 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:04:26.846 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:26.846 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:26.846 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:26.846 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:26.846 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.846 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:26.846 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:26.846 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.846 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.846 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.846 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.846 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283776 kB' 'MemFree: 42763560 kB' 'MemAvailable: 46482632 kB' 'Buffers: 9316 kB' 'Cached: 11455496 kB' 'SwapCached: 0 kB' 'Active: 8305408 kB' 'Inactive: 3688976 kB' 'Active(anon): 7887756 kB' 'Inactive(anon): 0 kB' 'Active(file): 417652 kB' 'Inactive(file): 3688976 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 532800 kB' 'Mapped: 159832 kB' 'Shmem: 7358184 kB' 'KReclaimable: 218624 kB' 'Slab: 897144 kB' 'SReclaimable: 218624 kB' 'SUnreclaim: 678520 kB' 'KernelStack: 21776 kB' 'PageTables: 7548 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481916 kB' 'Committed_AS: 9098580 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214400 kB' 'VmallocChunk: 0 kB' 'Percpu: 75264 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 
'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 513396 kB' 'DirectMap2M: 10706944 kB' 'DirectMap1G: 58720256 kB' 00:04:26.846 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.846 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.846 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.846 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.846 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.846 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.846 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.846 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.846 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.846 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.846 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.846 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.846 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.846 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.846 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.846 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.846 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.846 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.846 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.846 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.846 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.846 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.846 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.846 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.846 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.846 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.846 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.846 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.846 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.846 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.846 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.846 12:24:32 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.846 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.846 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.846 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.846 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.846 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.846 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.846 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.846 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.846 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.846 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.846 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.846 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.846 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.846 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.846 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.846 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.846 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.846 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.846 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.846 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.846 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.846 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.846 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.846 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.846 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.846 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.846 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.846 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.846 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.846 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.846 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.846 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.846 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.846 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.846 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.846 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.846 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.846 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.847 12:24:32 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.847 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@26 -- # local node 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=0 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@31 -- # no_nodes=2 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32585368 kB' 'MemFree: 26063060 kB' 'MemUsed: 6522308 kB' 'SwapCached: 0 kB' 'Active: 2787540 kB' 'Inactive: 184612 kB' 'Active(anon): 2616508 kB' 'Inactive(anon): 0 kB' 'Active(file): 171032 kB' 'Inactive(file): 184612 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2740884 kB' 'Mapped: 68832 kB' 'AnonPages: 234372 kB' 'Shmem: 2385240 kB' 'KernelStack: 12056 kB' 'PageTables: 3960 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 112080 kB' 'Slab: 430376 kB' 'SReclaimable: 112080 kB' 'SUnreclaim: 318296 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.848 12:24:32 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.848 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.849 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.849 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.849 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.849 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.849 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.849 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.849 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.849 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.849 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.849 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.849 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.849 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.849 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.849 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.849 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.849 12:24:32 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.849 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.849 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.849 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.849 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.849 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.849 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.849 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.849 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.849 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.849 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.849 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.849 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.849 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.849 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.849 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.849 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.849 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.849 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.849 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.849 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.849 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.849 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.849 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.849 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.849 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.849 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.849 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.849 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.849 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.849 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.849 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.849 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.849 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:26.849 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.849 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.849 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.849 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.849 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.849 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.849 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:26.849 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:26.849 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:04:26.849 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:04:26.849 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:04:26.849 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:04:26.849 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # echo 'node0=1024 expecting 1024' 00:04:26.849 node0=1024 expecting 1024 00:04:26.849 12:24:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@129 -- # [[ 1024 == \1\0\2\4 ]] 00:04:26.849 00:04:26.849 real 0m7.005s 00:04:26.849 user 0m2.602s 00:04:26.849 sys 0m4.481s 00:04:26.849 12:24:32 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:26.849 12:24:32 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:26.849 ************************************ 00:04:26.849 END TEST no_shrink_alloc 00:04:26.849 ************************************ 00:04:26.849 12:24:32 setup.sh.hugepages -- setup/hugepages.sh@206 -- # clear_hp 00:04:26.849 12:24:32 setup.sh.hugepages -- setup/hugepages.sh@36 -- # local node hp 00:04:26.849 12:24:32 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}" 00:04:26.849 12:24:32 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:26.849 12:24:32 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:04:26.849 12:24:32 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:26.849 12:24:32 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:04:26.849 12:24:32 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}" 00:04:26.849 12:24:32 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:26.849 12:24:32 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:04:26.849 12:24:32 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:26.849 12:24:32 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:04:26.849 12:24:32 setup.sh.hugepages -- setup/hugepages.sh@44 -- # export CLEAR_HUGE=yes 00:04:26.849 12:24:32 setup.sh.hugepages -- setup/hugepages.sh@44 -- # CLEAR_HUGE=yes 00:04:26.849 00:04:26.849 real 0m23.619s 00:04:26.849 user 0m8.394s 00:04:26.849 sys 0m14.020s 
00:04:26.849 12:24:32 setup.sh.hugepages -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:26.849 12:24:32 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:26.849 ************************************ 00:04:26.849 END TEST hugepages 00:04:26.849 ************************************ 00:04:27.109 12:24:32 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh 00:04:27.109 12:24:32 setup.sh -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:27.109 12:24:32 setup.sh -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:27.109 12:24:32 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:27.109 ************************************ 00:04:27.109 START TEST driver 00:04:27.109 ************************************ 00:04:27.109 12:24:32 setup.sh.driver -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh 00:04:27.109 * Looking for test storage... 00:04:27.109 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:04:27.109 12:24:32 setup.sh.driver -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:27.109 12:24:32 setup.sh.driver -- common/autotest_common.sh@1711 -- # lcov --version 00:04:27.109 12:24:32 setup.sh.driver -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:27.109 12:24:32 setup.sh.driver -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:27.109 12:24:32 setup.sh.driver -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:27.109 12:24:32 setup.sh.driver -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:27.109 12:24:32 setup.sh.driver -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:27.109 12:24:32 setup.sh.driver -- scripts/common.sh@336 -- # IFS=.-: 00:04:27.109 12:24:32 setup.sh.driver -- scripts/common.sh@336 -- # read -ra ver1 00:04:27.109 12:24:32 setup.sh.driver -- scripts/common.sh@337 -- # IFS=.-: 00:04:27.109 12:24:32 setup.sh.driver -- scripts/common.sh@337 -- # read -ra ver2 00:04:27.109 12:24:32 setup.sh.driver -- scripts/common.sh@338 -- # local 'op=<' 00:04:27.109 12:24:32 setup.sh.driver -- scripts/common.sh@340 -- # ver1_l=2 00:04:27.109 12:24:32 setup.sh.driver -- scripts/common.sh@341 -- # ver2_l=1 00:04:27.109 12:24:32 setup.sh.driver -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:27.109 12:24:32 setup.sh.driver -- scripts/common.sh@344 -- # case "$op" in 00:04:27.109 12:24:32 setup.sh.driver -- scripts/common.sh@345 -- # : 1 00:04:27.109 12:24:32 setup.sh.driver -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:27.109 12:24:32 setup.sh.driver -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:27.109 12:24:32 setup.sh.driver -- scripts/common.sh@365 -- # decimal 1 00:04:27.109 12:24:32 setup.sh.driver -- scripts/common.sh@353 -- # local d=1 00:04:27.109 12:24:32 setup.sh.driver -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:27.109 12:24:32 setup.sh.driver -- scripts/common.sh@355 -- # echo 1 00:04:27.109 12:24:32 setup.sh.driver -- scripts/common.sh@365 -- # ver1[v]=1 00:04:27.369 12:24:32 setup.sh.driver -- scripts/common.sh@366 -- # decimal 2 00:04:27.369 12:24:32 setup.sh.driver -- scripts/common.sh@353 -- # local d=2 00:04:27.369 12:24:32 setup.sh.driver -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:27.369 12:24:32 setup.sh.driver -- scripts/common.sh@355 -- # echo 2 00:04:27.369 12:24:32 setup.sh.driver -- scripts/common.sh@366 -- # ver2[v]=2 00:04:27.369 12:24:32 setup.sh.driver -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:27.369 12:24:32 setup.sh.driver -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:27.369 12:24:32 setup.sh.driver -- scripts/common.sh@368 -- # return 0 00:04:27.369 12:24:32 setup.sh.driver -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:27.369 12:24:32 setup.sh.driver -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:27.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.369 --rc genhtml_branch_coverage=1 00:04:27.369 --rc genhtml_function_coverage=1 00:04:27.369 --rc genhtml_legend=1 00:04:27.369 --rc geninfo_all_blocks=1 00:04:27.369 --rc geninfo_unexecuted_blocks=1 00:04:27.369 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:27.369 ' 00:04:27.369 12:24:32 setup.sh.driver -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:27.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.369 --rc genhtml_branch_coverage=1 00:04:27.369 --rc genhtml_function_coverage=1 00:04:27.369 --rc genhtml_legend=1 00:04:27.369 --rc geninfo_all_blocks=1 00:04:27.369 --rc geninfo_unexecuted_blocks=1 00:04:27.369 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:27.369 ' 00:04:27.369 12:24:32 setup.sh.driver -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:27.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.369 --rc genhtml_branch_coverage=1 00:04:27.369 --rc genhtml_function_coverage=1 00:04:27.369 --rc genhtml_legend=1 00:04:27.369 --rc geninfo_all_blocks=1 00:04:27.369 --rc geninfo_unexecuted_blocks=1 00:04:27.369 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:27.369 ' 00:04:27.369 12:24:32 setup.sh.driver -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:27.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.369 --rc genhtml_branch_coverage=1 00:04:27.369 --rc genhtml_function_coverage=1 00:04:27.369 --rc genhtml_legend=1 00:04:27.369 --rc geninfo_all_blocks=1 00:04:27.369 --rc geninfo_unexecuted_blocks=1 00:04:27.369 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:27.369 ' 00:04:27.369 12:24:32 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:27.369 12:24:32 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:27.369 12:24:32 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:04:32.645 12:24:37 setup.sh.driver -- 
setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:32.645 12:24:37 setup.sh.driver -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:32.645 12:24:37 setup.sh.driver -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:32.645 12:24:37 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:32.645 ************************************ 00:04:32.645 START TEST guess_driver 00:04:32.645 ************************************ 00:04:32.645 12:24:37 setup.sh.driver.guess_driver -- common/autotest_common.sh@1129 -- # guess_driver 00:04:32.645 12:24:37 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:32.645 12:24:37 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:32.645 12:24:37 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:32.645 12:24:37 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:32.645 12:24:37 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:32.645 12:24:37 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:32.645 12:24:37 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:32.645 12:24:37 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:32.645 12:24:37 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:32.645 12:24:37 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 176 > 0 )) 00:04:32.645 12:24:37 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:32.645 12:24:37 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:32.645 12:24:37 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:32.645 12:24:37 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:32.645 12:24:37 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:32.645 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:32.645 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:32.645 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:32.645 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:32.645 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:32.645 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:32.645 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:32.645 12:24:37 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:32.645 12:24:37 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:32.645 12:24:37 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:32.645 12:24:37 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:32.645 12:24:37 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:32.645 Looking for driver=vfio-pci 00:04:32.645 12:24:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:32.645 12:24:37 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- 
# setup output config 00:04:32.645 12:24:37 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:32.645 12:24:37 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:35.181 12:24:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:35.181 12:24:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:35.181 12:24:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:35.182 12:24:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:35.182 12:24:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:35.182 12:24:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:35.182 12:24:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:35.182 12:24:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:35.182 12:24:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:35.182 12:24:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:35.182 12:24:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:35.182 12:24:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:35.182 12:24:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:35.182 12:24:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:35.182 12:24:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:35.182 12:24:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:35.182 12:24:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:35.182 12:24:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:35.182 12:24:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:35.182 12:24:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:35.182 12:24:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:35.182 12:24:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:35.182 12:24:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:35.182 12:24:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:35.182 12:24:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:35.182 12:24:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:35.182 12:24:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:35.441 12:24:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:35.441 12:24:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:35.441 12:24:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:35.441 12:24:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:35.441 12:24:40 
setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:35.441 12:24:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:35.441 12:24:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:35.441 12:24:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:35.441 12:24:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:35.441 12:24:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:35.441 12:24:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:35.441 12:24:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:35.441 12:24:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:35.441 12:24:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:35.441 12:24:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:35.441 12:24:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:35.441 12:24:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:35.441 12:24:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:35.441 12:24:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:35.441 12:24:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:35.441 12:24:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:36.820 12:24:42 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:36.820 12:24:42 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:36.820 12:24:42 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:37.079 12:24:42 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:37.079 12:24:42 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:37.079 12:24:42 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:37.079 12:24:42 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:04:42.355 00:04:42.355 real 0m9.688s 00:04:42.355 user 0m2.690s 00:04:42.355 sys 0m4.863s 00:04:42.355 12:24:47 setup.sh.driver.guess_driver -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:42.355 12:24:47 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:42.355 ************************************ 00:04:42.355 END TEST guess_driver 00:04:42.355 ************************************ 00:04:42.355 00:04:42.355 real 0m14.580s 00:04:42.355 user 0m4.132s 00:04:42.355 sys 0m7.532s 00:04:42.355 12:24:47 setup.sh.driver -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:42.355 12:24:47 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:42.355 ************************************ 00:04:42.355 END TEST driver 00:04:42.355 ************************************ 00:04:42.355 12:24:47 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh 00:04:42.355 12:24:47 setup.sh -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:42.355 12:24:47 setup.sh -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:42.355 12:24:47 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:42.355 ************************************ 00:04:42.355 START TEST devices 00:04:42.355 ************************************ 00:04:42.355 12:24:47 setup.sh.devices -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh 00:04:42.355 * Looking for test storage... 00:04:42.355 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:04:42.355 12:24:47 setup.sh.devices -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:42.355 12:24:47 setup.sh.devices -- common/autotest_common.sh@1711 -- # lcov --version 00:04:42.355 12:24:47 setup.sh.devices -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:42.355 12:24:47 setup.sh.devices -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:42.355 12:24:47 setup.sh.devices -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:42.355 12:24:47 setup.sh.devices -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:42.355 12:24:47 setup.sh.devices -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:42.355 12:24:47 setup.sh.devices -- scripts/common.sh@336 -- # IFS=.-: 00:04:42.355 12:24:47 setup.sh.devices -- scripts/common.sh@336 -- # read -ra ver1 00:04:42.355 12:24:47 setup.sh.devices -- scripts/common.sh@337 -- # IFS=.-: 00:04:42.355 12:24:47 setup.sh.devices -- scripts/common.sh@337 -- # read -ra ver2 00:04:42.355 12:24:47 setup.sh.devices -- scripts/common.sh@338 -- # local 'op=<' 00:04:42.355 12:24:47 setup.sh.devices -- scripts/common.sh@340 -- # ver1_l=2 00:04:42.355 12:24:47 setup.sh.devices -- scripts/common.sh@341 -- # ver2_l=1 00:04:42.355 12:24:47 setup.sh.devices -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:42.355 12:24:47 setup.sh.devices -- scripts/common.sh@344 -- # case "$op" in 00:04:42.355 12:24:47 setup.sh.devices -- scripts/common.sh@345 -- # : 1 00:04:42.355 12:24:47 setup.sh.devices -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:42.355 12:24:47 setup.sh.devices -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:42.355 12:24:47 setup.sh.devices -- scripts/common.sh@365 -- # decimal 1 00:04:42.355 12:24:47 setup.sh.devices -- scripts/common.sh@353 -- # local d=1 00:04:42.355 12:24:47 setup.sh.devices -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:42.355 12:24:47 setup.sh.devices -- scripts/common.sh@355 -- # echo 1 00:04:42.355 12:24:47 setup.sh.devices -- scripts/common.sh@365 -- # ver1[v]=1 00:04:42.355 12:24:47 setup.sh.devices -- scripts/common.sh@366 -- # decimal 2 00:04:42.355 12:24:47 setup.sh.devices -- scripts/common.sh@353 -- # local d=2 00:04:42.355 12:24:47 setup.sh.devices -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:42.355 12:24:47 setup.sh.devices -- scripts/common.sh@355 -- # echo 2 00:04:42.355 12:24:47 setup.sh.devices -- scripts/common.sh@366 -- # ver2[v]=2 00:04:42.355 12:24:47 setup.sh.devices -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:42.355 12:24:47 setup.sh.devices -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:42.355 12:24:47 setup.sh.devices -- scripts/common.sh@368 -- # return 0 00:04:42.355 12:24:47 setup.sh.devices -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:42.355 12:24:47 setup.sh.devices -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:42.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.355 --rc genhtml_branch_coverage=1 00:04:42.355 --rc genhtml_function_coverage=1 00:04:42.355 --rc genhtml_legend=1 00:04:42.355 --rc geninfo_all_blocks=1 00:04:42.355 --rc geninfo_unexecuted_blocks=1 00:04:42.355 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:42.355 ' 00:04:42.355 12:24:47 setup.sh.devices -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:42.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.355 --rc genhtml_branch_coverage=1 00:04:42.355 --rc genhtml_function_coverage=1 00:04:42.355 --rc genhtml_legend=1 00:04:42.355 --rc geninfo_all_blocks=1 00:04:42.355 --rc geninfo_unexecuted_blocks=1 00:04:42.355 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:42.355 ' 00:04:42.355 12:24:47 setup.sh.devices -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:42.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.355 --rc genhtml_branch_coverage=1 00:04:42.355 --rc genhtml_function_coverage=1 00:04:42.355 --rc genhtml_legend=1 00:04:42.355 --rc geninfo_all_blocks=1 00:04:42.355 --rc geninfo_unexecuted_blocks=1 00:04:42.355 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:42.355 ' 00:04:42.355 12:24:47 setup.sh.devices -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:42.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.355 --rc genhtml_branch_coverage=1 00:04:42.355 --rc genhtml_function_coverage=1 00:04:42.355 --rc genhtml_legend=1 00:04:42.355 --rc geninfo_all_blocks=1 00:04:42.355 --rc geninfo_unexecuted_blocks=1 00:04:42.355 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:42.355 ' 00:04:42.355 12:24:47 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:42.355 12:24:47 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:42.355 12:24:47 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:42.355 12:24:47 setup.sh.devices -- setup/common.sh@12 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:04:45.653 12:24:50 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:45.653 12:24:50 setup.sh.devices -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:45.653 12:24:50 setup.sh.devices -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:45.653 12:24:50 setup.sh.devices -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:04:45.653 12:24:50 setup.sh.devices -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:04:45.653 12:24:50 setup.sh.devices -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:04:45.653 12:24:50 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:04:45.653 12:24:50 setup.sh.devices -- common/autotest_common.sh@1669 -- # bdf=0000:d8:00.0 00:04:45.653 12:24:50 setup.sh.devices -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:45.653 12:24:50 setup.sh.devices -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:04:45.653 12:24:50 setup.sh.devices -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:45.653 12:24:50 setup.sh.devices -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:45.653 12:24:50 setup.sh.devices -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:45.653 12:24:50 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:45.653 12:24:50 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:45.653 12:24:50 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:45.653 12:24:50 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:45.653 12:24:50 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:45.653 12:24:50 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:45.653 12:24:50 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:45.653 12:24:50 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:45.653 12:24:50 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:d8:00.0 00:04:45.653 12:24:50 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:04:45.653 12:24:50 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:45.653 12:24:50 setup.sh.devices -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:04:45.653 12:24:50 setup.sh.devices -- scripts/common.sh@390 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:45.653 No valid GPT data, bailing 00:04:45.653 12:24:50 setup.sh.devices -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:45.653 12:24:50 setup.sh.devices -- scripts/common.sh@394 -- # pt= 00:04:45.653 12:24:50 setup.sh.devices -- scripts/common.sh@395 -- # return 1 00:04:45.653 12:24:50 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:45.653 12:24:50 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:45.653 12:24:50 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:45.653 12:24:50 setup.sh.devices -- setup/common.sh@80 -- # echo 1600321314816 00:04:45.653 12:24:50 setup.sh.devices -- setup/devices.sh@204 -- # (( 1600321314816 >= min_disk_size )) 00:04:45.653 12:24:50 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:45.653 12:24:50 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:d8:00.0 
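The entries above are the device-selection pass of devices.sh: each /sys/block/nvme* disk is probed for an existing partition table (first with scripts/spdk-gpt.py, which bails with "No valid GPT data", then with blkid -s PTTYPE), its capacity is compared against min_disk_size (3221225472 bytes, i.e. 3 GiB), and a qualifying disk is recorded together with the PCI address of its controller (here nvme0n1 at 1600321314816 bytes on 0000:d8:00.0). A minimal sketch of that selection logic follows; it is an illustrative reconstruction rather than the verbatim devices.sh source, and the simplified glob plus the PCI lookup through /sys/block/<dev>/device/device are assumptions:

    min_disk_size=3221225472                      # 3 GiB lower bound used by the test
    declare -a blocks
    declare -A blocks_to_pci
    for block in /sys/block/nvme*n*; do
        dev=${block##*/}                          # e.g. nvme0n1
        # a disk that already carries a recognizable partition table counts as "in use"
        [[ -z $(blkid -s PTTYPE -o value "/dev/$dev") ]] || continue
        size=$(( $(cat "$block/size") * 512 ))    # sysfs size is in 512-byte sectors
        (( size >= min_disk_size )) || continue   # 1600321314816 >= 3221225472 on this node
        blocks+=("$dev")
        # PCI address of the owning controller (assumed sysfs layout)
        blocks_to_pci["$dev"]=$(basename "$(readlink -f "$block/device/device")")
    done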
00:04:45.653 12:24:50 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:45.653 12:24:50 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:45.653 12:24:50 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:45.653 12:24:50 setup.sh.devices -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:45.653 12:24:50 setup.sh.devices -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:45.653 12:24:50 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:45.653 ************************************ 00:04:45.653 START TEST nvme_mount 00:04:45.653 ************************************ 00:04:45.653 12:24:50 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1129 -- # nvme_mount 00:04:45.653 12:24:50 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:45.653 12:24:50 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:45.653 12:24:50 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:45.653 12:24:50 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:45.653 12:24:50 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:45.653 12:24:50 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:45.653 12:24:50 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:45.653 12:24:50 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:45.653 12:24:50 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:45.654 12:24:50 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:45.654 12:24:50 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:45.654 12:24:50 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:45.654 12:24:50 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:45.654 12:24:50 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:45.654 12:24:50 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:45.654 12:24:50 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:45.654 12:24:50 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:45.654 12:24:50 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:45.654 12:24:50 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:46.593 Creating new GPT entries in memory. 00:04:46.593 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:46.593 other utilities. 00:04:46.593 12:24:52 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:46.593 12:24:52 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:46.593 12:24:52 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:04:46.593 12:24:52 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:46.593 12:24:52 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:47.531 Creating new GPT entries in memory. 00:04:47.531 The operation has completed successfully. 00:04:47.531 12:24:53 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:47.531 12:24:53 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:47.531 12:24:53 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 953607 00:04:47.790 12:24:53 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:47.790 12:24:53 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:47.790 12:24:53 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:47.790 12:24:53 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:47.790 12:24:53 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:47.790 12:24:53 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:47.790 12:24:53 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:47.790 12:24:53 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:47.790 12:24:53 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:47.790 12:24:53 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:47.790 12:24:53 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:47.790 12:24:53 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:47.790 12:24:53 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:47.790 12:24:53 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:47.791 12:24:53 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:47.791 12:24:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.791 12:24:53 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:47.791 12:24:53 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:47.791 12:24:53 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:47.791 12:24:53 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:50.326 12:24:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:50.326 12:24:55 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.326 12:24:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:50.326 12:24:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.326 12:24:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:50.326 12:24:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.326 12:24:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:50.326 12:24:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.326 12:24:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:50.326 12:24:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.326 12:24:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:50.326 12:24:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.326 12:24:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:50.326 12:24:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.326 12:24:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:50.326 12:24:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.326 12:24:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:50.326 12:24:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.326 12:24:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:50.326 12:24:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.326 12:24:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:50.326 12:24:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.326 12:24:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:50.326 12:24:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.326 12:24:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:50.326 12:24:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.326 12:24:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:50.326 12:24:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.326 12:24:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:50.326 12:24:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.326 12:24:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:50.326 12:24:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.585 12:24:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 
0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:50.585 12:24:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:50.585 12:24:55 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:50.585 12:24:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.585 12:24:56 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:50.585 12:24:56 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:50.585 12:24:56 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:50.585 12:24:56 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:50.585 12:24:56 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:50.585 12:24:56 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:50.585 12:24:56 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:50.585 12:24:56 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:50.844 12:24:56 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:50.844 12:24:56 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:50.844 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:50.844 12:24:56 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:50.844 12:24:56 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:51.102 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:51.102 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 00:04:51.102 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:51.102 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:51.103 12:24:56 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:51.103 12:24:56 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:51.103 12:24:56 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:51.103 12:24:56 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:51.103 12:24:56 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:51.103 12:24:56 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:51.103 12:24:56 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:51.103 12:24:56 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:51.103 12:24:56 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:51.103 12:24:56 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:51.103 12:24:56 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:51.103 12:24:56 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:51.103 12:24:56 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:51.103 12:24:56 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:51.103 12:24:56 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:51.103 12:24:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.103 12:24:56 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:51.103 12:24:56 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:51.103 12:24:56 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:51.103 12:24:56 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:54.394 12:24:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:54.394 12:24:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.394 12:24:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:54.394 12:24:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.394 12:24:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:54.394 12:24:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.394 12:24:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:54.394 12:24:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.394 12:24:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:54.394 12:24:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.394 12:24:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:54.394 12:24:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.394 12:24:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:54.394 12:24:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.394 12:24:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:54.394 12:24:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.394 12:24:59 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:54.394 12:24:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.394 12:24:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:54.394 12:24:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.394 12:24:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:54.394 12:24:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.394 12:24:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:54.394 12:24:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.394 12:24:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:54.394 12:24:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.394 12:24:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:54.394 12:24:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.395 12:24:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:54.395 12:24:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.395 12:24:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:54.395 12:24:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.395 12:24:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:54.395 12:24:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:54.395 12:24:59 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:54.395 12:24:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.395 12:24:59 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:54.395 12:24:59 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:54.395 12:24:59 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:54.395 12:24:59 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:54.395 12:24:59 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:54.395 12:24:59 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:54.395 12:24:59 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:d8:00.0 data@nvme0n1 '' '' 00:04:54.395 12:24:59 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:54.395 12:24:59 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 
00:04:54.395 12:24:59 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:54.395 12:24:59 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:54.395 12:24:59 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:54.395 12:24:59 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:54.395 12:24:59 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:54.395 12:24:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.395 12:24:59 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:54.395 12:24:59 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:54.395 12:24:59 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:54.395 12:24:59 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:57.685 12:25:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:57.685 12:25:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.685 12:25:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:57.685 12:25:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.685 12:25:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:57.685 12:25:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.685 12:25:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:57.685 12:25:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.686 12:25:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:57.686 12:25:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.686 12:25:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:57.686 12:25:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.686 12:25:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:57.686 12:25:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.686 12:25:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:57.686 12:25:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.686 12:25:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:57.686 12:25:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.686 12:25:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:57.686 12:25:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.686 12:25:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:57.686 12:25:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.686 12:25:02 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:57.686 12:25:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.686 12:25:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:57.686 12:25:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.686 12:25:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:57.686 12:25:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.686 12:25:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:57.686 12:25:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.686 12:25:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:57.686 12:25:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.686 12:25:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:57.686 12:25:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:57.686 12:25:02 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:57.686 12:25:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.686 12:25:03 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:57.686 12:25:03 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:57.686 12:25:03 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:57.686 12:25:03 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:57.686 12:25:03 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:57.686 12:25:03 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:57.686 12:25:03 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:57.686 12:25:03 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:57.686 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:57.686 00:04:57.686 real 0m12.161s 00:04:57.686 user 0m3.473s 00:04:57.686 sys 0m6.490s 00:04:57.686 12:25:03 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:57.686 12:25:03 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:57.686 ************************************ 00:04:57.686 END TEST nvme_mount 00:04:57.686 ************************************ 00:04:57.686 12:25:03 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:57.686 12:25:03 setup.sh.devices -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:57.686 12:25:03 setup.sh.devices -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:57.686 12:25:03 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:57.686 ************************************ 00:04:57.686 START TEST dm_mount 00:04:57.686 ************************************ 00:04:57.686 12:25:03 setup.sh.devices.dm_mount -- common/autotest_common.sh@1129 -- # 
dm_mount 00:04:57.686 12:25:03 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:57.686 12:25:03 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:57.686 12:25:03 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:57.686 12:25:03 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:57.686 12:25:03 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:57.686 12:25:03 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:57.686 12:25:03 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:57.686 12:25:03 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:57.686 12:25:03 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:57.686 12:25:03 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:57.686 12:25:03 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:57.686 12:25:03 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:57.686 12:25:03 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:57.686 12:25:03 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:57.686 12:25:03 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:57.686 12:25:03 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:57.686 12:25:03 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:57.686 12:25:03 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:57.686 12:25:03 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:57.686 12:25:03 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:57.686 12:25:03 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:59.065 Creating new GPT entries in memory. 00:04:59.065 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:59.065 other utilities. 00:04:59.065 12:25:04 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:59.065 12:25:04 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:59.065 12:25:04 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:59.065 12:25:04 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:59.065 12:25:04 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:00.001 Creating new GPT entries in memory. 00:05:00.001 The operation has completed successfully. 00:05:00.001 12:25:05 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:00.001 12:25:05 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:00.001 12:25:05 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:00.001 12:25:05 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:00.001 12:25:05 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:05:00.940 The operation has completed successfully. 
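At this point dm_mount has wiped the GPT and carved two 1 GiB partitions out of the test disk (sectors 2048-2099199 and 2099200-4196351); the entries that follow create a device-mapper node named nvme_dm_test on top of them, format it with ext4 and mount it under test/setup/dm_mount before re-running the PCI_ALLOWED verification. A rough sketch of that sequence, assuming a plain linear concatenation for the mapper table (the log only records "dmsetup create nvme_dm_test", not the table it is fed, so the mapping below is illustrative):

    sgdisk /dev/nvme0n1 --zap-all
    sgdisk /dev/nvme0n1 --new=1:2048:2099199        # nvme0n1p1, 1 GiB
    sgdisk /dev/nvme0n1 --new=2:2099200:4196351     # nvme0n1p2, 1 GiB
    p1_sz=$(blockdev --getsz /dev/nvme0n1p1)        # partition sizes in 512-byte sectors
    p2_sz=$(blockdev --getsz /dev/nvme0n1p2)
    {
        echo "0 $p1_sz linear /dev/nvme0n1p1 0"
        echo "$p1_sz $p2_sz linear /dev/nvme0n1p2 0"
    } | dmsetup create nvme_dm_test                 # assumed linear table, read from stdin
    mkfs.ext4 -qF /dev/mapper/nvme_dm_test
    mount /dev/mapper/nvme_dm_test \
        /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount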
00:05:00.940 12:25:06 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:00.940 12:25:06 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:00.940 12:25:06 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 958031 00:05:00.940 12:25:06 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:00.940 12:25:06 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:00.940 12:25:06 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:00.940 12:25:06 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:00.940 12:25:06 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:00.940 12:25:06 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:00.940 12:25:06 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:00.940 12:25:06 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:00.940 12:25:06 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:00.940 12:25:06 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:00.940 12:25:06 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:05:00.940 12:25:06 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:00.940 12:25:06 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:00.940 12:25:06 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:00.940 12:25:06 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount size= 00:05:00.940 12:25:06 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:00.940 12:25:06 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:00.940 12:25:06 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:00.940 12:25:06 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:00.940 12:25:06 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:d8:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:00.940 12:25:06 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:05:00.940 12:25:06 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:00.940 12:25:06 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:00.940 12:25:06 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:00.940 12:25:06 setup.sh.devices.dm_mount -- 
setup/devices.sh@53 -- # local found=0 00:05:00.940 12:25:06 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:00.940 12:25:06 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:05:00.940 12:25:06 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:00.940 12:25:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.940 12:25:06 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:05:00.940 12:25:06 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:00.940 12:25:06 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:00.940 12:25:06 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:05:04.248 12:25:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:04.248 12:25:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.248 12:25:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:04.248 12:25:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.248 12:25:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:04.248 12:25:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.248 12:25:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:04.248 12:25:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.248 12:25:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:04.248 12:25:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.248 12:25:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:04.248 12:25:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.248 12:25:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:04.248 12:25:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.248 12:25:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:04.248 12:25:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.248 12:25:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:04.248 12:25:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.248 12:25:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:04.248 12:25:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.248 12:25:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:04.248 12:25:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.248 12:25:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:04.248 12:25:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 
-- # read -r pci _ _ status 00:05:04.248 12:25:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:04.248 12:25:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.248 12:25:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:04.248 12:25:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.248 12:25:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:04.248 12:25:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.248 12:25:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:04.248 12:25:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.248 12:25:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:04.248 12:25:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:04.248 12:25:09 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:04.248 12:25:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.609 12:25:09 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:04.609 12:25:09 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount ]] 00:05:04.609 12:25:09 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:04.609 12:25:09 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:04.609 12:25:09 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:04.609 12:25:09 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:04.609 12:25:09 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:d8:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:04.609 12:25:09 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:05:04.609 12:25:09 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:04.609 12:25:09 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:04.609 12:25:09 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:05:04.609 12:25:09 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:04.609 12:25:09 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:04.609 12:25:09 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:04.609 12:25:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.609 12:25:09 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:05:04.609 12:25:09 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:04.609 12:25:09 
setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:04.609 12:25:09 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:05:07.964 12:25:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:07.964 12:25:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.964 12:25:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:07.964 12:25:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.964 12:25:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:07.964 12:25:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.964 12:25:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:07.964 12:25:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.964 12:25:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:07.964 12:25:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.964 12:25:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:07.964 12:25:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.964 12:25:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:07.964 12:25:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.964 12:25:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:07.964 12:25:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.964 12:25:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:07.964 12:25:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.964 12:25:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:07.964 12:25:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.964 12:25:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:07.964 12:25:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.964 12:25:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:07.964 12:25:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.964 12:25:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:07.964 12:25:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.964 12:25:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:07.964 12:25:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.964 12:25:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:07.964 12:25:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ 
status 00:05:07.964 12:25:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:07.964 12:25:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.964 12:25:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:07.964 12:25:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:07.964 12:25:13 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:07.964 12:25:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.964 12:25:13 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:07.964 12:25:13 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:07.964 12:25:13 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:07.964 12:25:13 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:07.965 12:25:13 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:07.965 12:25:13 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:07.965 12:25:13 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:07.965 12:25:13 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:07.965 12:25:13 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:07.965 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:07.965 12:25:13 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:07.965 12:25:13 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:07.965 00:05:07.965 real 0m10.011s 00:05:07.965 user 0m2.446s 00:05:07.965 sys 0m4.656s 00:05:07.965 12:25:13 setup.sh.devices.dm_mount -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:07.965 12:25:13 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:07.965 ************************************ 00:05:07.965 END TEST dm_mount 00:05:07.965 ************************************ 00:05:07.965 12:25:13 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:07.965 12:25:13 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:07.965 12:25:13 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:07.965 12:25:13 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:07.965 12:25:13 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:07.965 12:25:13 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:07.965 12:25:13 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:08.290 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:08.290 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 00:05:08.290 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:08.290 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:08.290 12:25:13 setup.sh.devices -- 
setup/devices.sh@12 -- # cleanup_dm 00:05:08.290 12:25:13 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:08.290 12:25:13 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:08.290 12:25:13 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:08.290 12:25:13 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:08.290 12:25:13 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:08.290 12:25:13 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:08.290 00:05:08.290 real 0m26.444s 00:05:08.290 user 0m7.374s 00:05:08.290 sys 0m13.828s 00:05:08.290 12:25:13 setup.sh.devices -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:08.290 12:25:13 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:08.290 ************************************ 00:05:08.290 END TEST devices 00:05:08.290 ************************************ 00:05:08.290 00:05:08.290 real 1m29.531s 00:05:08.290 user 0m27.851s 00:05:08.290 sys 0m50.560s 00:05:08.290 12:25:13 setup.sh -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:08.291 12:25:13 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:08.291 ************************************ 00:05:08.291 END TEST setup.sh 00:05:08.291 ************************************ 00:05:08.291 12:25:13 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:05:11.583 Hugepages 00:05:11.583 node hugesize free / total 00:05:11.583 node0 1048576kB 0 / 0 00:05:11.583 node0 2048kB 1024 / 1024 00:05:11.583 node1 1048576kB 0 / 0 00:05:11.583 node1 2048kB 1024 / 1024 00:05:11.583 00:05:11.583 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:11.583 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:05:11.583 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:05:11.583 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:05:11.583 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:05:11.583 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:05:11.583 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:05:11.583 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:05:11.583 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:05:11.583 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:05:11.583 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:05:11.583 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:05:11.583 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:05:11.583 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:05:11.583 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:05:11.583 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:05:11.583 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:05:11.583 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:05:11.842 12:25:17 -- spdk/autotest.sh@117 -- # uname -s 00:05:11.842 12:25:17 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:11.842 12:25:17 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:11.842 12:25:17 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:05:15.129 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:15.129 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:15.129 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:15.129 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:15.129 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:15.129 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:15.129 0000:00:04.1 (8086 2021): ioatdma 
-> vfio-pci 00:05:15.129 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:15.129 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:15.129 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:15.129 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:15.129 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:15.129 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:15.129 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:15.129 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:15.129 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:17.033 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:05:17.033 12:25:22 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:17.968 12:25:23 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:17.968 12:25:23 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:17.968 12:25:23 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:05:17.968 12:25:23 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:05:17.968 12:25:23 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:17.968 12:25:23 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:17.968 12:25:23 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:17.968 12:25:23 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:17.968 12:25:23 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:17.968 12:25:23 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:17.968 12:25:23 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:d8:00.0 00:05:17.968 12:25:23 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:05:21.251 Waiting for block devices as requested 00:05:21.251 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:21.251 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:21.251 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:21.251 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:21.251 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:21.251 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:21.251 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:21.510 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:21.510 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:21.510 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:21.510 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:21.769 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:21.769 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:21.769 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:22.028 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:22.028 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:22.287 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:05:22.287 12:25:27 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:22.287 12:25:27 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:d8:00.0 00:05:22.287 12:25:27 -- common/autotest_common.sh@1487 -- # grep 0000:d8:00.0/nvme/nvme 00:05:22.287 12:25:27 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:05:22.287 12:25:27 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:05:22.287 12:25:27 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 ]] 00:05:22.287 12:25:27 -- common/autotest_common.sh@1492 -- # basename 
/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:05:22.287 12:25:27 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:22.287 12:25:27 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:05:22.287 12:25:27 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:05:22.287 12:25:27 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:05:22.287 12:25:27 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:22.287 12:25:27 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:22.287 12:25:27 -- common/autotest_common.sh@1531 -- # oacs=' 0xe' 00:05:22.287 12:25:27 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:22.287 12:25:27 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:22.287 12:25:27 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:22.287 12:25:27 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:22.287 12:25:27 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:22.287 12:25:27 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:22.287 12:25:27 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:22.287 12:25:27 -- common/autotest_common.sh@1543 -- # continue 00:05:22.287 12:25:27 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:22.287 12:25:27 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:22.287 12:25:27 -- common/autotest_common.sh@10 -- # set +x 00:05:22.546 12:25:27 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:22.546 12:25:27 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:22.546 12:25:27 -- common/autotest_common.sh@10 -- # set +x 00:05:22.546 12:25:27 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:05:25.835 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:25.835 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:25.835 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:25.835 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:25.835 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:25.835 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:25.835 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:25.835 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:25.835 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:25.835 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:25.835 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:25.835 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:25.835 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:25.835 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:25.835 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:25.835 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:27.214 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:05:27.472 12:25:32 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:27.473 12:25:32 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:27.473 12:25:32 -- common/autotest_common.sh@10 -- # set +x 00:05:27.473 12:25:32 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:27.473 12:25:32 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:05:27.473 12:25:32 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:05:27.473 12:25:32 -- common/autotest_common.sh@1563 -- # bdfs=() 00:05:27.473 12:25:32 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:05:27.473 12:25:32 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:05:27.473 12:25:32 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:05:27.473 12:25:32 -- 
common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:27.473 12:25:32 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:27.473 12:25:32 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:27.473 12:25:32 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:27.473 12:25:32 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:27.473 12:25:32 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:27.473 12:25:32 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:27.473 12:25:32 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:d8:00.0 00:05:27.473 12:25:32 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:27.473 12:25:32 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:d8:00.0/device 00:05:27.473 12:25:32 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:05:27.473 12:25:32 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:27.473 12:25:32 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:05:27.473 12:25:32 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:05:27.473 12:25:32 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:d8:00.0 00:05:27.473 12:25:32 -- common/autotest_common.sh@1579 -- # [[ -z 0000:d8:00.0 ]] 00:05:27.473 12:25:32 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=967585 00:05:27.473 12:25:32 -- common/autotest_common.sh@1585 -- # waitforlisten 967585 00:05:27.473 12:25:32 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:27.473 12:25:32 -- common/autotest_common.sh@835 -- # '[' -z 967585 ']' 00:05:27.473 12:25:32 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:27.473 12:25:32 -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:27.473 12:25:32 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:27.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:27.473 12:25:32 -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:27.473 12:25:32 -- common/autotest_common.sh@10 -- # set +x 00:05:27.473 [2024-12-16 12:25:33.003813] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 
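The get_nvme_bdfs / get_nvme_bdfs_by_id helpers traced above reduce to a short pipeline. The sketch below is illustrative only: it assumes it is run from the SPDK repository root, and 0x0a54 is simply the PCI device ID this run reads back from sysfs.

# Minimal sketch of the BDF discovery exercised above (assumed: run from the SPDK repo root)
bdfs=($(./scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))   # every NVMe BDF, e.g. 0000:d8:00.0
for bdf in "${bdfs[@]}"; do
    # keep only controllers whose PCI device ID matches the one seen in this run (0x0a54)
    [[ $(cat "/sys/bus/pci/devices/$bdf/device") == 0x0a54 ]] && echo "$bdf"
done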
00:05:27.473 [2024-12-16 12:25:33.003878] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid967585 ] 00:05:27.732 [2024-12-16 12:25:33.077590] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.732 [2024-12-16 12:25:33.120672] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.992 12:25:33 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:27.992 12:25:33 -- common/autotest_common.sh@868 -- # return 0 00:05:27.992 12:25:33 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:05:27.992 12:25:33 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:05:27.992 12:25:33 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0 00:05:31.282 nvme0n1 00:05:31.282 12:25:36 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:31.282 [2024-12-16 12:25:36.518318] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:05:31.282 request: 00:05:31.282 { 00:05:31.282 "nvme_ctrlr_name": "nvme0", 00:05:31.282 "password": "test", 00:05:31.282 "method": "bdev_nvme_opal_revert", 00:05:31.282 "req_id": 1 00:05:31.282 } 00:05:31.282 Got JSON-RPC error response 00:05:31.282 response: 00:05:31.282 { 00:05:31.282 "code": -32602, 00:05:31.282 "message": "Invalid parameters" 00:05:31.282 } 00:05:31.282 12:25:36 -- common/autotest_common.sh@1591 -- # true 00:05:31.282 12:25:36 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:05:31.282 12:25:36 -- common/autotest_common.sh@1595 -- # killprocess 967585 00:05:31.282 12:25:36 -- common/autotest_common.sh@954 -- # '[' -z 967585 ']' 00:05:31.282 12:25:36 -- common/autotest_common.sh@958 -- # kill -0 967585 00:05:31.282 12:25:36 -- common/autotest_common.sh@959 -- # uname 00:05:31.282 12:25:36 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:31.282 12:25:36 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 967585 00:05:31.282 12:25:36 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:31.282 12:25:36 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:31.282 12:25:36 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 967585' 00:05:31.282 killing process with pid 967585 00:05:31.282 12:25:36 -- common/autotest_common.sh@973 -- # kill 967585 00:05:31.282 12:25:36 -- common/autotest_common.sh@978 -- # wait 967585 00:05:33.188 12:25:38 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:33.188 12:25:38 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:33.188 12:25:38 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:33.188 12:25:38 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:33.188 12:25:38 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:33.188 12:25:38 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:33.188 12:25:38 -- common/autotest_common.sh@10 -- # set +x 00:05:33.188 12:25:38 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:33.188 12:25:38 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:05:33.188 12:25:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:33.188 12:25:38 -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:05:33.188 12:25:38 -- common/autotest_common.sh@10 -- # set +x 00:05:33.188 ************************************ 00:05:33.188 START TEST env 00:05:33.188 ************************************ 00:05:33.188 12:25:38 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:05:33.447 * Looking for test storage... 00:05:33.447 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env 00:05:33.448 12:25:38 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:33.448 12:25:38 env -- common/autotest_common.sh@1711 -- # lcov --version 00:05:33.448 12:25:38 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:33.448 12:25:38 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:33.448 12:25:38 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:33.448 12:25:38 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:33.448 12:25:38 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:33.448 12:25:38 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:33.448 12:25:38 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:33.448 12:25:38 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:33.448 12:25:38 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:33.448 12:25:38 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:33.448 12:25:38 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:33.448 12:25:38 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:33.448 12:25:38 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:33.448 12:25:38 env -- scripts/common.sh@344 -- # case "$op" in 00:05:33.448 12:25:38 env -- scripts/common.sh@345 -- # : 1 00:05:33.448 12:25:38 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:33.448 12:25:38 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:33.448 12:25:38 env -- scripts/common.sh@365 -- # decimal 1 00:05:33.448 12:25:38 env -- scripts/common.sh@353 -- # local d=1 00:05:33.448 12:25:38 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:33.448 12:25:38 env -- scripts/common.sh@355 -- # echo 1 00:05:33.448 12:25:38 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:33.448 12:25:38 env -- scripts/common.sh@366 -- # decimal 2 00:05:33.448 12:25:38 env -- scripts/common.sh@353 -- # local d=2 00:05:33.448 12:25:38 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:33.448 12:25:38 env -- scripts/common.sh@355 -- # echo 2 00:05:33.448 12:25:38 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:33.448 12:25:38 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:33.448 12:25:38 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:33.448 12:25:38 env -- scripts/common.sh@368 -- # return 0 00:05:33.448 12:25:38 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:33.448 12:25:38 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:33.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.448 --rc genhtml_branch_coverage=1 00:05:33.448 --rc genhtml_function_coverage=1 00:05:33.448 --rc genhtml_legend=1 00:05:33.448 --rc geninfo_all_blocks=1 00:05:33.448 --rc geninfo_unexecuted_blocks=1 00:05:33.448 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:33.448 ' 00:05:33.448 12:25:38 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:33.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.448 --rc genhtml_branch_coverage=1 00:05:33.448 --rc genhtml_function_coverage=1 00:05:33.448 --rc genhtml_legend=1 00:05:33.448 --rc geninfo_all_blocks=1 00:05:33.448 --rc geninfo_unexecuted_blocks=1 00:05:33.448 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:33.448 ' 00:05:33.448 12:25:38 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:33.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.448 --rc genhtml_branch_coverage=1 00:05:33.448 --rc genhtml_function_coverage=1 00:05:33.448 --rc genhtml_legend=1 00:05:33.448 --rc geninfo_all_blocks=1 00:05:33.448 --rc geninfo_unexecuted_blocks=1 00:05:33.448 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:33.448 ' 00:05:33.448 12:25:38 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:33.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.448 --rc genhtml_branch_coverage=1 00:05:33.448 --rc genhtml_function_coverage=1 00:05:33.448 --rc genhtml_legend=1 00:05:33.448 --rc geninfo_all_blocks=1 00:05:33.448 --rc geninfo_unexecuted_blocks=1 00:05:33.448 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:33.448 ' 00:05:33.448 12:25:38 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:05:33.448 12:25:38 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:33.448 12:25:38 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:33.448 12:25:38 env -- common/autotest_common.sh@10 -- # set +x 00:05:33.448 ************************************ 00:05:33.448 START TEST env_memory 00:05:33.448 ************************************ 00:05:33.448 12:25:38 env.env_memory -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:05:33.448 00:05:33.448 00:05:33.448 CUnit - A unit testing framework for C - Version 2.1-3 00:05:33.448 http://cunit.sourceforge.net/ 00:05:33.448 00:05:33.448 00:05:33.448 Suite: memory 00:05:33.708 Test: alloc and free memory map ...[2024-12-16 12:25:39.012859] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:33.708 passed 00:05:33.708 Test: mem map translation ...[2024-12-16 12:25:39.025504] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 596:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:33.708 [2024-12-16 12:25:39.025519] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 596:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:33.708 [2024-12-16 12:25:39.025564] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:33.708 [2024-12-16 12:25:39.025574] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:33.708 passed 00:05:33.708 Test: mem map registration ...[2024-12-16 12:25:39.045625] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 348:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:33.708 [2024-12-16 12:25:39.045641] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 348:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:33.708 passed 00:05:33.708 Test: mem map adjacent registrations ...passed 00:05:33.708 00:05:33.708 Run Summary: Type Total Ran Passed Failed Inactive 00:05:33.708 suites 1 1 n/a 0 0 00:05:33.708 tests 4 4 4 0 0 00:05:33.708 asserts 152 152 152 0 n/a 00:05:33.708 00:05:33.708 Elapsed time = 0.081 seconds 00:05:33.708 00:05:33.708 real 0m0.095s 00:05:33.708 user 0m0.081s 00:05:33.708 sys 0m0.013s 00:05:33.708 12:25:39 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:33.708 12:25:39 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:33.708 ************************************ 00:05:33.708 END TEST env_memory 00:05:33.708 ************************************ 00:05:33.708 12:25:39 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:33.708 12:25:39 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:33.708 12:25:39 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:33.708 12:25:39 env -- common/autotest_common.sh@10 -- # set +x 00:05:33.708 ************************************ 00:05:33.708 START TEST env_vtophys 00:05:33.708 ************************************ 00:05:33.708 12:25:39 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:33.708 EAL: lib.eal log level changed from notice to debug 00:05:33.708 EAL: Detected lcore 0 as core 0 on socket 0 00:05:33.708 EAL: Detected lcore 1 as core 1 on socket 0 00:05:33.708 EAL: Detected lcore 2 as core 2 on socket 0 00:05:33.708 EAL: Detected lcore 3 as 
core 3 on socket 0 00:05:33.708 EAL: Detected lcore 4 as core 4 on socket 0 00:05:33.708 EAL: Detected lcore 5 as core 5 on socket 0 00:05:33.708 EAL: Detected lcore 6 as core 6 on socket 0 00:05:33.708 EAL: Detected lcore 7 as core 8 on socket 0 00:05:33.708 EAL: Detected lcore 8 as core 9 on socket 0 00:05:33.708 EAL: Detected lcore 9 as core 10 on socket 0 00:05:33.708 EAL: Detected lcore 10 as core 11 on socket 0 00:05:33.708 EAL: Detected lcore 11 as core 12 on socket 0 00:05:33.708 EAL: Detected lcore 12 as core 13 on socket 0 00:05:33.709 EAL: Detected lcore 13 as core 14 on socket 0 00:05:33.709 EAL: Detected lcore 14 as core 16 on socket 0 00:05:33.709 EAL: Detected lcore 15 as core 17 on socket 0 00:05:33.709 EAL: Detected lcore 16 as core 18 on socket 0 00:05:33.709 EAL: Detected lcore 17 as core 19 on socket 0 00:05:33.709 EAL: Detected lcore 18 as core 20 on socket 0 00:05:33.709 EAL: Detected lcore 19 as core 21 on socket 0 00:05:33.709 EAL: Detected lcore 20 as core 22 on socket 0 00:05:33.709 EAL: Detected lcore 21 as core 24 on socket 0 00:05:33.709 EAL: Detected lcore 22 as core 25 on socket 0 00:05:33.709 EAL: Detected lcore 23 as core 26 on socket 0 00:05:33.709 EAL: Detected lcore 24 as core 27 on socket 0 00:05:33.709 EAL: Detected lcore 25 as core 28 on socket 0 00:05:33.709 EAL: Detected lcore 26 as core 29 on socket 0 00:05:33.709 EAL: Detected lcore 27 as core 30 on socket 0 00:05:33.709 EAL: Detected lcore 28 as core 0 on socket 1 00:05:33.709 EAL: Detected lcore 29 as core 1 on socket 1 00:05:33.709 EAL: Detected lcore 30 as core 2 on socket 1 00:05:33.709 EAL: Detected lcore 31 as core 3 on socket 1 00:05:33.709 EAL: Detected lcore 32 as core 4 on socket 1 00:05:33.709 EAL: Detected lcore 33 as core 5 on socket 1 00:05:33.709 EAL: Detected lcore 34 as core 6 on socket 1 00:05:33.709 EAL: Detected lcore 35 as core 8 on socket 1 00:05:33.709 EAL: Detected lcore 36 as core 9 on socket 1 00:05:33.709 EAL: Detected lcore 37 as core 10 on socket 1 00:05:33.709 EAL: Detected lcore 38 as core 11 on socket 1 00:05:33.709 EAL: Detected lcore 39 as core 12 on socket 1 00:05:33.709 EAL: Detected lcore 40 as core 13 on socket 1 00:05:33.709 EAL: Detected lcore 41 as core 14 on socket 1 00:05:33.709 EAL: Detected lcore 42 as core 16 on socket 1 00:05:33.709 EAL: Detected lcore 43 as core 17 on socket 1 00:05:33.709 EAL: Detected lcore 44 as core 18 on socket 1 00:05:33.709 EAL: Detected lcore 45 as core 19 on socket 1 00:05:33.709 EAL: Detected lcore 46 as core 20 on socket 1 00:05:33.709 EAL: Detected lcore 47 as core 21 on socket 1 00:05:33.709 EAL: Detected lcore 48 as core 22 on socket 1 00:05:33.709 EAL: Detected lcore 49 as core 24 on socket 1 00:05:33.709 EAL: Detected lcore 50 as core 25 on socket 1 00:05:33.709 EAL: Detected lcore 51 as core 26 on socket 1 00:05:33.709 EAL: Detected lcore 52 as core 27 on socket 1 00:05:33.709 EAL: Detected lcore 53 as core 28 on socket 1 00:05:33.709 EAL: Detected lcore 54 as core 29 on socket 1 00:05:33.709 EAL: Detected lcore 55 as core 30 on socket 1 00:05:33.709 EAL: Detected lcore 56 as core 0 on socket 0 00:05:33.709 EAL: Detected lcore 57 as core 1 on socket 0 00:05:33.709 EAL: Detected lcore 58 as core 2 on socket 0 00:05:33.709 EAL: Detected lcore 59 as core 3 on socket 0 00:05:33.709 EAL: Detected lcore 60 as core 4 on socket 0 00:05:33.709 EAL: Detected lcore 61 as core 5 on socket 0 00:05:33.709 EAL: Detected lcore 62 as core 6 on socket 0 00:05:33.709 EAL: Detected lcore 63 as core 8 on socket 0 00:05:33.709 EAL: 
Detected lcore 64 as core 9 on socket 0 00:05:33.709 EAL: Detected lcore 65 as core 10 on socket 0 00:05:33.709 EAL: Detected lcore 66 as core 11 on socket 0 00:05:33.709 EAL: Detected lcore 67 as core 12 on socket 0 00:05:33.709 EAL: Detected lcore 68 as core 13 on socket 0 00:05:33.709 EAL: Detected lcore 69 as core 14 on socket 0 00:05:33.709 EAL: Detected lcore 70 as core 16 on socket 0 00:05:33.709 EAL: Detected lcore 71 as core 17 on socket 0 00:05:33.709 EAL: Detected lcore 72 as core 18 on socket 0 00:05:33.709 EAL: Detected lcore 73 as core 19 on socket 0 00:05:33.709 EAL: Detected lcore 74 as core 20 on socket 0 00:05:33.709 EAL: Detected lcore 75 as core 21 on socket 0 00:05:33.709 EAL: Detected lcore 76 as core 22 on socket 0 00:05:33.709 EAL: Detected lcore 77 as core 24 on socket 0 00:05:33.709 EAL: Detected lcore 78 as core 25 on socket 0 00:05:33.709 EAL: Detected lcore 79 as core 26 on socket 0 00:05:33.709 EAL: Detected lcore 80 as core 27 on socket 0 00:05:33.709 EAL: Detected lcore 81 as core 28 on socket 0 00:05:33.709 EAL: Detected lcore 82 as core 29 on socket 0 00:05:33.709 EAL: Detected lcore 83 as core 30 on socket 0 00:05:33.709 EAL: Detected lcore 84 as core 0 on socket 1 00:05:33.709 EAL: Detected lcore 85 as core 1 on socket 1 00:05:33.709 EAL: Detected lcore 86 as core 2 on socket 1 00:05:33.709 EAL: Detected lcore 87 as core 3 on socket 1 00:05:33.709 EAL: Detected lcore 88 as core 4 on socket 1 00:05:33.709 EAL: Detected lcore 89 as core 5 on socket 1 00:05:33.709 EAL: Detected lcore 90 as core 6 on socket 1 00:05:33.709 EAL: Detected lcore 91 as core 8 on socket 1 00:05:33.709 EAL: Detected lcore 92 as core 9 on socket 1 00:05:33.709 EAL: Detected lcore 93 as core 10 on socket 1 00:05:33.709 EAL: Detected lcore 94 as core 11 on socket 1 00:05:33.709 EAL: Detected lcore 95 as core 12 on socket 1 00:05:33.709 EAL: Detected lcore 96 as core 13 on socket 1 00:05:33.709 EAL: Detected lcore 97 as core 14 on socket 1 00:05:33.709 EAL: Detected lcore 98 as core 16 on socket 1 00:05:33.709 EAL: Detected lcore 99 as core 17 on socket 1 00:05:33.709 EAL: Detected lcore 100 as core 18 on socket 1 00:05:33.709 EAL: Detected lcore 101 as core 19 on socket 1 00:05:33.709 EAL: Detected lcore 102 as core 20 on socket 1 00:05:33.709 EAL: Detected lcore 103 as core 21 on socket 1 00:05:33.709 EAL: Detected lcore 104 as core 22 on socket 1 00:05:33.709 EAL: Detected lcore 105 as core 24 on socket 1 00:05:33.709 EAL: Detected lcore 106 as core 25 on socket 1 00:05:33.709 EAL: Detected lcore 107 as core 26 on socket 1 00:05:33.709 EAL: Detected lcore 108 as core 27 on socket 1 00:05:33.709 EAL: Detected lcore 109 as core 28 on socket 1 00:05:33.709 EAL: Detected lcore 110 as core 29 on socket 1 00:05:33.709 EAL: Detected lcore 111 as core 30 on socket 1 00:05:33.709 EAL: Maximum logical cores by configuration: 128 00:05:33.709 EAL: Detected CPU lcores: 112 00:05:33.709 EAL: Detected NUMA nodes: 2 00:05:33.709 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:33.709 EAL: Checking presence of .so 'librte_eal.so.24' 00:05:33.709 EAL: Checking presence of .so 'librte_eal.so' 00:05:33.709 EAL: Detected static linkage of DPDK 00:05:33.709 EAL: No shared files mode enabled, IPC will be disabled 00:05:33.709 EAL: Bus pci wants IOVA as 'DC' 00:05:33.709 EAL: Buses did not request a specific IOVA mode. 00:05:33.709 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:33.709 EAL: Selected IOVA mode 'VA' 00:05:33.709 EAL: Probing VFIO support... 
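The topology and hugepage figures the EAL reports above can also be read straight from sysfs before the memory segment lists are built. A minimal sketch, with the sysfs paths assumed and the values matching what this run reports:

# Hugepage pool per NUMA node (this run shows 1024 x 2048kB on node0 and node1)
cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages
# A populated iommu_groups directory is what lets EAL select VFIO and IOVA-as-VA below
ls /sys/kernel/iommu_groups | wc -l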
00:05:33.709 EAL: IOMMU type 1 (Type 1) is supported 00:05:33.709 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:33.709 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:33.709 EAL: VFIO support initialized 00:05:33.709 EAL: Ask a virtual area of 0x2e000 bytes 00:05:33.709 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:33.709 EAL: Setting up physically contiguous memory... 00:05:33.709 EAL: Setting maximum number of open files to 524288 00:05:33.709 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:33.709 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:33.709 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:33.709 EAL: Ask a virtual area of 0x61000 bytes 00:05:33.709 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:33.709 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:33.709 EAL: Ask a virtual area of 0x400000000 bytes 00:05:33.709 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:33.709 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:33.709 EAL: Ask a virtual area of 0x61000 bytes 00:05:33.709 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:33.709 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:33.709 EAL: Ask a virtual area of 0x400000000 bytes 00:05:33.709 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:33.709 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:33.709 EAL: Ask a virtual area of 0x61000 bytes 00:05:33.709 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:33.709 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:33.709 EAL: Ask a virtual area of 0x400000000 bytes 00:05:33.709 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:33.709 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:33.709 EAL: Ask a virtual area of 0x61000 bytes 00:05:33.709 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:33.709 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:33.709 EAL: Ask a virtual area of 0x400000000 bytes 00:05:33.709 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:33.709 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:33.709 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:33.709 EAL: Ask a virtual area of 0x61000 bytes 00:05:33.709 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:33.709 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:33.709 EAL: Ask a virtual area of 0x400000000 bytes 00:05:33.709 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:33.709 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:33.709 EAL: Ask a virtual area of 0x61000 bytes 00:05:33.709 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:33.709 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:33.709 EAL: Ask a virtual area of 0x400000000 bytes 00:05:33.709 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:33.709 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:33.709 EAL: Ask a virtual area of 0x61000 bytes 00:05:33.709 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:33.709 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:33.709 EAL: Ask a virtual area of 0x400000000 bytes 00:05:33.709 EAL: Virtual area found at 
0x201800e00000 (size = 0x400000000) 00:05:33.709 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:33.709 EAL: Ask a virtual area of 0x61000 bytes 00:05:33.709 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:33.709 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:33.709 EAL: Ask a virtual area of 0x400000000 bytes 00:05:33.709 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:05:33.709 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:33.709 EAL: Hugepages will be freed exactly as allocated. 00:05:33.709 EAL: No shared files mode enabled, IPC is disabled 00:05:33.709 EAL: No shared files mode enabled, IPC is disabled 00:05:33.709 EAL: TSC frequency is ~2500000 KHz 00:05:33.709 EAL: Main lcore 0 is ready (tid=7f942cb56a00;cpuset=[0]) 00:05:33.709 EAL: Trying to obtain current memory policy. 00:05:33.709 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:33.709 EAL: Restoring previous memory policy: 0 00:05:33.709 EAL: request: mp_malloc_sync 00:05:33.709 EAL: No shared files mode enabled, IPC is disabled 00:05:33.709 EAL: Heap on socket 0 was expanded by 2MB 00:05:33.709 EAL: No shared files mode enabled, IPC is disabled 00:05:33.709 EAL: Mem event callback 'spdk:(nil)' registered 00:05:33.709 00:05:33.709 00:05:33.709 CUnit - A unit testing framework for C - Version 2.1-3 00:05:33.709 http://cunit.sourceforge.net/ 00:05:33.709 00:05:33.709 00:05:33.710 Suite: components_suite 00:05:33.710 Test: vtophys_malloc_test ...passed 00:05:33.710 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:33.710 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:33.710 EAL: Restoring previous memory policy: 4 00:05:33.710 EAL: Calling mem event callback 'spdk:(nil)' 00:05:33.710 EAL: request: mp_malloc_sync 00:05:33.710 EAL: No shared files mode enabled, IPC is disabled 00:05:33.710 EAL: Heap on socket 0 was expanded by 4MB 00:05:33.710 EAL: Calling mem event callback 'spdk:(nil)' 00:05:33.710 EAL: request: mp_malloc_sync 00:05:33.710 EAL: No shared files mode enabled, IPC is disabled 00:05:33.710 EAL: Heap on socket 0 was shrunk by 4MB 00:05:33.710 EAL: Trying to obtain current memory policy. 00:05:33.710 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:33.710 EAL: Restoring previous memory policy: 4 00:05:33.710 EAL: Calling mem event callback 'spdk:(nil)' 00:05:33.710 EAL: request: mp_malloc_sync 00:05:33.710 EAL: No shared files mode enabled, IPC is disabled 00:05:33.710 EAL: Heap on socket 0 was expanded by 6MB 00:05:33.710 EAL: Calling mem event callback 'spdk:(nil)' 00:05:33.710 EAL: request: mp_malloc_sync 00:05:33.710 EAL: No shared files mode enabled, IPC is disabled 00:05:33.710 EAL: Heap on socket 0 was shrunk by 6MB 00:05:33.710 EAL: Trying to obtain current memory policy. 00:05:33.710 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:33.710 EAL: Restoring previous memory policy: 4 00:05:33.710 EAL: Calling mem event callback 'spdk:(nil)' 00:05:33.710 EAL: request: mp_malloc_sync 00:05:33.710 EAL: No shared files mode enabled, IPC is disabled 00:05:33.710 EAL: Heap on socket 0 was expanded by 10MB 00:05:33.710 EAL: Calling mem event callback 'spdk:(nil)' 00:05:33.710 EAL: request: mp_malloc_sync 00:05:33.710 EAL: No shared files mode enabled, IPC is disabled 00:05:33.710 EAL: Heap on socket 0 was shrunk by 10MB 00:05:33.710 EAL: Trying to obtain current memory policy. 
00:05:33.710 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:33.710 EAL: Restoring previous memory policy: 4 00:05:33.710 EAL: Calling mem event callback 'spdk:(nil)' 00:05:33.710 EAL: request: mp_malloc_sync 00:05:33.710 EAL: No shared files mode enabled, IPC is disabled 00:05:33.710 EAL: Heap on socket 0 was expanded by 18MB 00:05:33.710 EAL: Calling mem event callback 'spdk:(nil)' 00:05:33.710 EAL: request: mp_malloc_sync 00:05:33.710 EAL: No shared files mode enabled, IPC is disabled 00:05:33.710 EAL: Heap on socket 0 was shrunk by 18MB 00:05:33.710 EAL: Trying to obtain current memory policy. 00:05:33.710 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:33.710 EAL: Restoring previous memory policy: 4 00:05:33.710 EAL: Calling mem event callback 'spdk:(nil)' 00:05:33.710 EAL: request: mp_malloc_sync 00:05:33.710 EAL: No shared files mode enabled, IPC is disabled 00:05:33.710 EAL: Heap on socket 0 was expanded by 34MB 00:05:33.710 EAL: Calling mem event callback 'spdk:(nil)' 00:05:33.710 EAL: request: mp_malloc_sync 00:05:33.710 EAL: No shared files mode enabled, IPC is disabled 00:05:33.710 EAL: Heap on socket 0 was shrunk by 34MB 00:05:33.710 EAL: Trying to obtain current memory policy. 00:05:33.710 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:33.969 EAL: Restoring previous memory policy: 4 00:05:33.969 EAL: Calling mem event callback 'spdk:(nil)' 00:05:33.969 EAL: request: mp_malloc_sync 00:05:33.969 EAL: No shared files mode enabled, IPC is disabled 00:05:33.969 EAL: Heap on socket 0 was expanded by 66MB 00:05:33.969 EAL: Calling mem event callback 'spdk:(nil)' 00:05:33.969 EAL: request: mp_malloc_sync 00:05:33.969 EAL: No shared files mode enabled, IPC is disabled 00:05:33.969 EAL: Heap on socket 0 was shrunk by 66MB 00:05:33.969 EAL: Trying to obtain current memory policy. 00:05:33.969 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:33.969 EAL: Restoring previous memory policy: 4 00:05:33.969 EAL: Calling mem event callback 'spdk:(nil)' 00:05:33.969 EAL: request: mp_malloc_sync 00:05:33.969 EAL: No shared files mode enabled, IPC is disabled 00:05:33.969 EAL: Heap on socket 0 was expanded by 130MB 00:05:33.969 EAL: Calling mem event callback 'spdk:(nil)' 00:05:33.969 EAL: request: mp_malloc_sync 00:05:33.969 EAL: No shared files mode enabled, IPC is disabled 00:05:33.969 EAL: Heap on socket 0 was shrunk by 130MB 00:05:33.969 EAL: Trying to obtain current memory policy. 00:05:33.969 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:33.969 EAL: Restoring previous memory policy: 4 00:05:33.969 EAL: Calling mem event callback 'spdk:(nil)' 00:05:33.969 EAL: request: mp_malloc_sync 00:05:33.969 EAL: No shared files mode enabled, IPC is disabled 00:05:33.969 EAL: Heap on socket 0 was expanded by 258MB 00:05:33.969 EAL: Calling mem event callback 'spdk:(nil)' 00:05:33.969 EAL: request: mp_malloc_sync 00:05:33.969 EAL: No shared files mode enabled, IPC is disabled 00:05:33.969 EAL: Heap on socket 0 was shrunk by 258MB 00:05:33.969 EAL: Trying to obtain current memory policy. 
00:05:33.969 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:34.228 EAL: Restoring previous memory policy: 4 00:05:34.228 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.228 EAL: request: mp_malloc_sync 00:05:34.228 EAL: No shared files mode enabled, IPC is disabled 00:05:34.228 EAL: Heap on socket 0 was expanded by 514MB 00:05:34.228 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.228 EAL: request: mp_malloc_sync 00:05:34.228 EAL: No shared files mode enabled, IPC is disabled 00:05:34.228 EAL: Heap on socket 0 was shrunk by 514MB 00:05:34.228 EAL: Trying to obtain current memory policy. 00:05:34.228 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:34.487 EAL: Restoring previous memory policy: 4 00:05:34.487 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.487 EAL: request: mp_malloc_sync 00:05:34.487 EAL: No shared files mode enabled, IPC is disabled 00:05:34.487 EAL: Heap on socket 0 was expanded by 1026MB 00:05:34.746 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.746 EAL: request: mp_malloc_sync 00:05:34.746 EAL: No shared files mode enabled, IPC is disabled 00:05:34.746 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:34.746 passed 00:05:34.746 00:05:34.746 Run Summary: Type Total Ran Passed Failed Inactive 00:05:34.746 suites 1 1 n/a 0 0 00:05:34.746 tests 2 2 2 0 0 00:05:34.746 asserts 497 497 497 0 n/a 00:05:34.746 00:05:34.746 Elapsed time = 0.960 seconds 00:05:34.746 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.746 EAL: request: mp_malloc_sync 00:05:34.746 EAL: No shared files mode enabled, IPC is disabled 00:05:34.746 EAL: Heap on socket 0 was shrunk by 2MB 00:05:34.746 EAL: No shared files mode enabled, IPC is disabled 00:05:34.746 EAL: No shared files mode enabled, IPC is disabled 00:05:34.746 EAL: No shared files mode enabled, IPC is disabled 00:05:34.746 00:05:34.746 real 0m1.083s 00:05:34.746 user 0m0.629s 00:05:34.746 sys 0m0.430s 00:05:34.746 12:25:40 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:34.746 12:25:40 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:34.746 ************************************ 00:05:34.746 END TEST env_vtophys 00:05:34.746 ************************************ 00:05:34.746 12:25:40 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:05:34.746 12:25:40 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:34.746 12:25:40 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:34.746 12:25:40 env -- common/autotest_common.sh@10 -- # set +x 00:05:35.005 ************************************ 00:05:35.005 START TEST env_pci 00:05:35.005 ************************************ 00:05:35.005 12:25:40 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:05:35.005 00:05:35.005 00:05:35.005 CUnit - A unit testing framework for C - Version 2.1-3 00:05:35.005 http://cunit.sourceforge.net/ 00:05:35.005 00:05:35.005 00:05:35.005 Suite: pci 00:05:35.005 Test: pci_hook ...[2024-12-16 12:25:40.331971] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/pci.c:1118:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 968870 has claimed it 00:05:35.005 EAL: Cannot find device (10000:00:01.0) 00:05:35.005 EAL: Failed to attach device on primary process 00:05:35.005 passed 00:05:35.005 00:05:35.005 Run Summary: Type Total Ran Passed Failed Inactive 
00:05:35.005 suites 1 1 n/a 0 0 00:05:35.005 tests 1 1 1 0 0 00:05:35.005 asserts 25 25 25 0 n/a 00:05:35.005 00:05:35.005 Elapsed time = 0.035 seconds 00:05:35.005 00:05:35.005 real 0m0.055s 00:05:35.005 user 0m0.022s 00:05:35.005 sys 0m0.033s 00:05:35.005 12:25:40 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:35.005 12:25:40 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:35.005 ************************************ 00:05:35.005 END TEST env_pci 00:05:35.005 ************************************ 00:05:35.005 12:25:40 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:35.005 12:25:40 env -- env/env.sh@15 -- # uname 00:05:35.005 12:25:40 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:35.005 12:25:40 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:35.005 12:25:40 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:35.005 12:25:40 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:05:35.005 12:25:40 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:35.005 12:25:40 env -- common/autotest_common.sh@10 -- # set +x 00:05:35.005 ************************************ 00:05:35.005 START TEST env_dpdk_post_init 00:05:35.005 ************************************ 00:05:35.005 12:25:40 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:35.005 EAL: Detected CPU lcores: 112 00:05:35.005 EAL: Detected NUMA nodes: 2 00:05:35.005 EAL: Detected static linkage of DPDK 00:05:35.005 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:35.005 EAL: Selected IOVA mode 'VA' 00:05:35.005 EAL: VFIO support initialized 00:05:35.005 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:35.264 EAL: Using IOMMU type 1 (Type 1) 00:05:35.832 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:d8:00.0 (socket 1) 00:05:40.027 EAL: Releasing PCI mapped resource for 0000:d8:00.0 00:05:40.027 EAL: Calling pci_unmap_resource for 0000:d8:00.0 at 0x202001000000 00:05:40.027 Starting DPDK initialization... 00:05:40.027 Starting SPDK post initialization... 00:05:40.027 SPDK NVMe probe 00:05:40.027 Attaching to 0000:d8:00.0 00:05:40.027 Attached to 0000:d8:00.0 00:05:40.027 Cleaning up... 
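The probe above only attaches to 0000:d8:00.0 because setup.sh moved it from the kernel nvme driver to vfio-pci earlier in this log. Which driver currently owns the controller can be checked with a one-liner; the sysfs path is the standard location, shown here as a sketch rather than part of the test:

bdf=0000:d8:00.0
basename "$(readlink "/sys/bus/pci/devices/$bdf/driver")"   # vfio-pci while SPDK owns it, nvme again after setup.sh reset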
00:05:40.027 00:05:40.027 real 0m4.752s 00:05:40.027 user 0m3.374s 00:05:40.027 sys 0m0.625s 00:05:40.027 12:25:45 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.027 12:25:45 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:40.027 ************************************ 00:05:40.027 END TEST env_dpdk_post_init 00:05:40.027 ************************************ 00:05:40.027 12:25:45 env -- env/env.sh@26 -- # uname 00:05:40.027 12:25:45 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:40.027 12:25:45 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:40.027 12:25:45 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:40.027 12:25:45 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:40.027 12:25:45 env -- common/autotest_common.sh@10 -- # set +x 00:05:40.027 ************************************ 00:05:40.027 START TEST env_mem_callbacks 00:05:40.027 ************************************ 00:05:40.027 12:25:45 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:40.027 EAL: Detected CPU lcores: 112 00:05:40.027 EAL: Detected NUMA nodes: 2 00:05:40.027 EAL: Detected static linkage of DPDK 00:05:40.027 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:40.027 EAL: Selected IOVA mode 'VA' 00:05:40.027 EAL: VFIO support initialized 00:05:40.027 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:40.027 00:05:40.027 00:05:40.027 CUnit - A unit testing framework for C - Version 2.1-3 00:05:40.027 http://cunit.sourceforge.net/ 00:05:40.027 00:05:40.027 00:05:40.027 Suite: memory 00:05:40.027 Test: test ... 
00:05:40.027 register 0x200000200000 2097152 00:05:40.027 malloc 3145728 00:05:40.027 register 0x200000400000 4194304 00:05:40.027 buf 0x200000500000 len 3145728 PASSED 00:05:40.027 malloc 64 00:05:40.027 buf 0x2000004fff40 len 64 PASSED 00:05:40.027 malloc 4194304 00:05:40.027 register 0x200000800000 6291456 00:05:40.027 buf 0x200000a00000 len 4194304 PASSED 00:05:40.027 free 0x200000500000 3145728 00:05:40.027 free 0x2000004fff40 64 00:05:40.027 unregister 0x200000400000 4194304 PASSED 00:05:40.027 free 0x200000a00000 4194304 00:05:40.027 unregister 0x200000800000 6291456 PASSED 00:05:40.027 malloc 8388608 00:05:40.027 register 0x200000400000 10485760 00:05:40.027 buf 0x200000600000 len 8388608 PASSED 00:05:40.027 free 0x200000600000 8388608 00:05:40.027 unregister 0x200000400000 10485760 PASSED 00:05:40.027 passed 00:05:40.027 00:05:40.027 Run Summary: Type Total Ran Passed Failed Inactive 00:05:40.027 suites 1 1 n/a 0 0 00:05:40.027 tests 1 1 1 0 0 00:05:40.027 asserts 15 15 15 0 n/a 00:05:40.027 00:05:40.027 Elapsed time = 0.005 seconds 00:05:40.027 00:05:40.027 real 0m0.066s 00:05:40.027 user 0m0.020s 00:05:40.027 sys 0m0.046s 00:05:40.027 12:25:45 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.027 12:25:45 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:40.027 ************************************ 00:05:40.027 END TEST env_mem_callbacks 00:05:40.027 ************************************ 00:05:40.027 00:05:40.027 real 0m6.655s 00:05:40.027 user 0m4.388s 00:05:40.027 sys 0m1.532s 00:05:40.027 12:25:45 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.027 12:25:45 env -- common/autotest_common.sh@10 -- # set +x 00:05:40.027 ************************************ 00:05:40.027 END TEST env 00:05:40.027 ************************************ 00:05:40.027 12:25:45 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:05:40.027 12:25:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:40.027 12:25:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:40.027 12:25:45 -- common/autotest_common.sh@10 -- # set +x 00:05:40.027 ************************************ 00:05:40.027 START TEST rpc 00:05:40.027 ************************************ 00:05:40.027 12:25:45 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:05:40.027 * Looking for test storage... 
00:05:40.027 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:05:40.027 12:25:45 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:40.287 12:25:45 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:40.287 12:25:45 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:40.287 12:25:45 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:40.287 12:25:45 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:40.287 12:25:45 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:40.287 12:25:45 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:40.287 12:25:45 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:40.287 12:25:45 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:40.287 12:25:45 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:40.287 12:25:45 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:40.287 12:25:45 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:40.287 12:25:45 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:40.287 12:25:45 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:40.287 12:25:45 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:40.287 12:25:45 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:40.287 12:25:45 rpc -- scripts/common.sh@345 -- # : 1 00:05:40.287 12:25:45 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:40.288 12:25:45 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:40.288 12:25:45 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:40.288 12:25:45 rpc -- scripts/common.sh@353 -- # local d=1 00:05:40.288 12:25:45 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:40.288 12:25:45 rpc -- scripts/common.sh@355 -- # echo 1 00:05:40.288 12:25:45 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:40.288 12:25:45 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:40.288 12:25:45 rpc -- scripts/common.sh@353 -- # local d=2 00:05:40.288 12:25:45 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:40.288 12:25:45 rpc -- scripts/common.sh@355 -- # echo 2 00:05:40.288 12:25:45 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:40.288 12:25:45 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:40.288 12:25:45 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:40.288 12:25:45 rpc -- scripts/common.sh@368 -- # return 0 00:05:40.288 12:25:45 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:40.288 12:25:45 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:40.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.288 --rc genhtml_branch_coverage=1 00:05:40.288 --rc genhtml_function_coverage=1 00:05:40.288 --rc genhtml_legend=1 00:05:40.288 --rc geninfo_all_blocks=1 00:05:40.288 --rc geninfo_unexecuted_blocks=1 00:05:40.288 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:40.288 ' 00:05:40.288 12:25:45 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:40.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.288 --rc genhtml_branch_coverage=1 00:05:40.288 --rc genhtml_function_coverage=1 00:05:40.288 --rc genhtml_legend=1 00:05:40.288 --rc geninfo_all_blocks=1 00:05:40.288 --rc geninfo_unexecuted_blocks=1 00:05:40.288 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:40.288 ' 00:05:40.288 12:25:45 rpc -- common/autotest_common.sh@1725 -- # 
export 'LCOV=lcov 00:05:40.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.288 --rc genhtml_branch_coverage=1 00:05:40.288 --rc genhtml_function_coverage=1 00:05:40.288 --rc genhtml_legend=1 00:05:40.288 --rc geninfo_all_blocks=1 00:05:40.288 --rc geninfo_unexecuted_blocks=1 00:05:40.288 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:40.288 ' 00:05:40.288 12:25:45 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:40.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.288 --rc genhtml_branch_coverage=1 00:05:40.288 --rc genhtml_function_coverage=1 00:05:40.288 --rc genhtml_legend=1 00:05:40.288 --rc geninfo_all_blocks=1 00:05:40.288 --rc geninfo_unexecuted_blocks=1 00:05:40.288 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:40.288 ' 00:05:40.288 12:25:45 rpc -- rpc/rpc.sh@65 -- # spdk_pid=970038 00:05:40.288 12:25:45 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:40.288 12:25:45 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:40.288 12:25:45 rpc -- rpc/rpc.sh@67 -- # waitforlisten 970038 00:05:40.288 12:25:45 rpc -- common/autotest_common.sh@835 -- # '[' -z 970038 ']' 00:05:40.288 12:25:45 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.288 12:25:45 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:40.288 12:25:45 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:40.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:40.288 12:25:45 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:40.288 12:25:45 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.288 [2024-12-16 12:25:45.702727] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:05:40.288 [2024-12-16 12:25:45.702789] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid970038 ] 00:05:40.288 [2024-12-16 12:25:45.772077] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.288 [2024-12-16 12:25:45.810521] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:40.288 [2024-12-16 12:25:45.810562] app.c: 616:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 970038' to capture a snapshot of events at runtime. 00:05:40.288 [2024-12-16 12:25:45.810572] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:40.288 [2024-12-16 12:25:45.810580] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:40.288 [2024-12-16 12:25:45.810587] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid970038 for offline analysis/debug. 
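The app_setup_trace notices above spell out how to inspect the bdev tracepoint group this spdk_tgt instance was started with (-e bdev, per rpc.sh@64 earlier). A minimal sketch using the exact hint printed in this run; pid 970038 and the shared-memory filename are specific to this run:

# Live snapshot of the enabled tracepoint group, as suggested by the log
spdk_trace -s spdk_tgt -p 970038
# Or keep the shared-memory trace file for offline analysis/debug
cp /dev/shm/spdk_tgt_trace.pid970038 /tmp/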
00:05:40.288 [2024-12-16 12:25:45.811208] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.548 12:25:46 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:40.548 12:25:46 rpc -- common/autotest_common.sh@868 -- # return 0 00:05:40.548 12:25:46 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:05:40.548 12:25:46 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:05:40.548 12:25:46 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:40.548 12:25:46 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:40.548 12:25:46 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:40.548 12:25:46 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:40.548 12:25:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.548 ************************************ 00:05:40.548 START TEST rpc_integrity 00:05:40.548 ************************************ 00:05:40.548 12:25:46 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:40.548 12:25:46 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:40.548 12:25:46 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.548 12:25:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:40.548 12:25:46 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.548 12:25:46 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:40.548 12:25:46 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:40.548 12:25:46 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:40.548 12:25:46 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:40.548 12:25:46 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.548 12:25:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:40.807 12:25:46 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.807 12:25:46 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:40.807 12:25:46 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:40.807 12:25:46 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.807 12:25:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:40.807 12:25:46 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.807 12:25:46 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:40.807 { 00:05:40.807 "name": "Malloc0", 00:05:40.807 "aliases": [ 00:05:40.807 "4015426a-ac89-4089-b928-ee646f824a37" 00:05:40.807 ], 00:05:40.807 "product_name": "Malloc disk", 00:05:40.807 "block_size": 512, 00:05:40.807 "num_blocks": 16384, 00:05:40.807 "uuid": "4015426a-ac89-4089-b928-ee646f824a37", 00:05:40.807 "assigned_rate_limits": { 00:05:40.807 "rw_ios_per_sec": 0, 00:05:40.807 "rw_mbytes_per_sec": 0, 00:05:40.807 "r_mbytes_per_sec": 0, 00:05:40.807 "w_mbytes_per_sec": 
0 00:05:40.807 }, 00:05:40.807 "claimed": false, 00:05:40.807 "zoned": false, 00:05:40.807 "supported_io_types": { 00:05:40.807 "read": true, 00:05:40.807 "write": true, 00:05:40.807 "unmap": true, 00:05:40.807 "flush": true, 00:05:40.807 "reset": true, 00:05:40.807 "nvme_admin": false, 00:05:40.807 "nvme_io": false, 00:05:40.807 "nvme_io_md": false, 00:05:40.807 "write_zeroes": true, 00:05:40.807 "zcopy": true, 00:05:40.807 "get_zone_info": false, 00:05:40.807 "zone_management": false, 00:05:40.807 "zone_append": false, 00:05:40.807 "compare": false, 00:05:40.807 "compare_and_write": false, 00:05:40.807 "abort": true, 00:05:40.807 "seek_hole": false, 00:05:40.807 "seek_data": false, 00:05:40.807 "copy": true, 00:05:40.807 "nvme_iov_md": false 00:05:40.807 }, 00:05:40.807 "memory_domains": [ 00:05:40.807 { 00:05:40.807 "dma_device_id": "system", 00:05:40.807 "dma_device_type": 1 00:05:40.807 }, 00:05:40.807 { 00:05:40.807 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:40.807 "dma_device_type": 2 00:05:40.807 } 00:05:40.807 ], 00:05:40.807 "driver_specific": {} 00:05:40.807 } 00:05:40.807 ]' 00:05:40.807 12:25:46 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:40.807 12:25:46 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:40.807 12:25:46 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:40.807 12:25:46 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.807 12:25:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:40.807 [2024-12-16 12:25:46.182924] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:40.807 [2024-12-16 12:25:46.182959] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:40.807 [2024-12-16 12:25:46.182981] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x58e5d80 00:05:40.807 [2024-12-16 12:25:46.182991] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:40.807 [2024-12-16 12:25:46.183865] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:40.807 [2024-12-16 12:25:46.183889] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:40.807 Passthru0 00:05:40.807 12:25:46 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.807 12:25:46 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:40.807 12:25:46 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.807 12:25:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:40.807 12:25:46 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.807 12:25:46 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:40.807 { 00:05:40.807 "name": "Malloc0", 00:05:40.807 "aliases": [ 00:05:40.807 "4015426a-ac89-4089-b928-ee646f824a37" 00:05:40.807 ], 00:05:40.807 "product_name": "Malloc disk", 00:05:40.807 "block_size": 512, 00:05:40.807 "num_blocks": 16384, 00:05:40.807 "uuid": "4015426a-ac89-4089-b928-ee646f824a37", 00:05:40.807 "assigned_rate_limits": { 00:05:40.807 "rw_ios_per_sec": 0, 00:05:40.807 "rw_mbytes_per_sec": 0, 00:05:40.807 "r_mbytes_per_sec": 0, 00:05:40.807 "w_mbytes_per_sec": 0 00:05:40.807 }, 00:05:40.808 "claimed": true, 00:05:40.808 "claim_type": "exclusive_write", 00:05:40.808 "zoned": false, 00:05:40.808 "supported_io_types": { 00:05:40.808 "read": true, 00:05:40.808 "write": true, 00:05:40.808 "unmap": true, 
00:05:40.808 "flush": true, 00:05:40.808 "reset": true, 00:05:40.808 "nvme_admin": false, 00:05:40.808 "nvme_io": false, 00:05:40.808 "nvme_io_md": false, 00:05:40.808 "write_zeroes": true, 00:05:40.808 "zcopy": true, 00:05:40.808 "get_zone_info": false, 00:05:40.808 "zone_management": false, 00:05:40.808 "zone_append": false, 00:05:40.808 "compare": false, 00:05:40.808 "compare_and_write": false, 00:05:40.808 "abort": true, 00:05:40.808 "seek_hole": false, 00:05:40.808 "seek_data": false, 00:05:40.808 "copy": true, 00:05:40.808 "nvme_iov_md": false 00:05:40.808 }, 00:05:40.808 "memory_domains": [ 00:05:40.808 { 00:05:40.808 "dma_device_id": "system", 00:05:40.808 "dma_device_type": 1 00:05:40.808 }, 00:05:40.808 { 00:05:40.808 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:40.808 "dma_device_type": 2 00:05:40.808 } 00:05:40.808 ], 00:05:40.808 "driver_specific": {} 00:05:40.808 }, 00:05:40.808 { 00:05:40.808 "name": "Passthru0", 00:05:40.808 "aliases": [ 00:05:40.808 "a2aea0eb-c0e0-5fde-940e-1ef3e0ffbd6f" 00:05:40.808 ], 00:05:40.808 "product_name": "passthru", 00:05:40.808 "block_size": 512, 00:05:40.808 "num_blocks": 16384, 00:05:40.808 "uuid": "a2aea0eb-c0e0-5fde-940e-1ef3e0ffbd6f", 00:05:40.808 "assigned_rate_limits": { 00:05:40.808 "rw_ios_per_sec": 0, 00:05:40.808 "rw_mbytes_per_sec": 0, 00:05:40.808 "r_mbytes_per_sec": 0, 00:05:40.808 "w_mbytes_per_sec": 0 00:05:40.808 }, 00:05:40.808 "claimed": false, 00:05:40.808 "zoned": false, 00:05:40.808 "supported_io_types": { 00:05:40.808 "read": true, 00:05:40.808 "write": true, 00:05:40.808 "unmap": true, 00:05:40.808 "flush": true, 00:05:40.808 "reset": true, 00:05:40.808 "nvme_admin": false, 00:05:40.808 "nvme_io": false, 00:05:40.808 "nvme_io_md": false, 00:05:40.808 "write_zeroes": true, 00:05:40.808 "zcopy": true, 00:05:40.808 "get_zone_info": false, 00:05:40.808 "zone_management": false, 00:05:40.808 "zone_append": false, 00:05:40.808 "compare": false, 00:05:40.808 "compare_and_write": false, 00:05:40.808 "abort": true, 00:05:40.808 "seek_hole": false, 00:05:40.808 "seek_data": false, 00:05:40.808 "copy": true, 00:05:40.808 "nvme_iov_md": false 00:05:40.808 }, 00:05:40.808 "memory_domains": [ 00:05:40.808 { 00:05:40.808 "dma_device_id": "system", 00:05:40.808 "dma_device_type": 1 00:05:40.808 }, 00:05:40.808 { 00:05:40.808 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:40.808 "dma_device_type": 2 00:05:40.808 } 00:05:40.808 ], 00:05:40.808 "driver_specific": { 00:05:40.808 "passthru": { 00:05:40.808 "name": "Passthru0", 00:05:40.808 "base_bdev_name": "Malloc0" 00:05:40.808 } 00:05:40.808 } 00:05:40.808 } 00:05:40.808 ]' 00:05:40.808 12:25:46 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:40.808 12:25:46 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:40.808 12:25:46 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:40.808 12:25:46 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.808 12:25:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:40.808 12:25:46 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.808 12:25:46 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:40.808 12:25:46 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.808 12:25:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:40.808 12:25:46 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.808 12:25:46 rpc.rpc_integrity -- 
rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:40.808 12:25:46 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.808 12:25:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:40.808 12:25:46 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.808 12:25:46 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:40.808 12:25:46 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:40.808 12:25:46 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:40.808 00:05:40.808 real 0m0.277s 00:05:40.808 user 0m0.172s 00:05:40.808 sys 0m0.052s 00:05:40.808 12:25:46 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.808 12:25:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:40.808 ************************************ 00:05:40.808 END TEST rpc_integrity 00:05:40.808 ************************************ 00:05:41.067 12:25:46 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:41.067 12:25:46 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:41.067 12:25:46 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.067 12:25:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.067 ************************************ 00:05:41.067 START TEST rpc_plugins 00:05:41.067 ************************************ 00:05:41.067 12:25:46 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:05:41.067 12:25:46 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:41.067 12:25:46 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.067 12:25:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:41.067 12:25:46 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.067 12:25:46 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:41.067 12:25:46 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:41.067 12:25:46 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.067 12:25:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:41.067 12:25:46 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.067 12:25:46 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:41.067 { 00:05:41.067 "name": "Malloc1", 00:05:41.067 "aliases": [ 00:05:41.067 "9d7558a4-7bd6-4d62-97d4-a70980aa3a15" 00:05:41.067 ], 00:05:41.067 "product_name": "Malloc disk", 00:05:41.067 "block_size": 4096, 00:05:41.067 "num_blocks": 256, 00:05:41.067 "uuid": "9d7558a4-7bd6-4d62-97d4-a70980aa3a15", 00:05:41.067 "assigned_rate_limits": { 00:05:41.067 "rw_ios_per_sec": 0, 00:05:41.067 "rw_mbytes_per_sec": 0, 00:05:41.067 "r_mbytes_per_sec": 0, 00:05:41.067 "w_mbytes_per_sec": 0 00:05:41.067 }, 00:05:41.067 "claimed": false, 00:05:41.067 "zoned": false, 00:05:41.067 "supported_io_types": { 00:05:41.067 "read": true, 00:05:41.067 "write": true, 00:05:41.067 "unmap": true, 00:05:41.067 "flush": true, 00:05:41.067 "reset": true, 00:05:41.067 "nvme_admin": false, 00:05:41.067 "nvme_io": false, 00:05:41.067 "nvme_io_md": false, 00:05:41.067 "write_zeroes": true, 00:05:41.067 "zcopy": true, 00:05:41.067 "get_zone_info": false, 00:05:41.067 "zone_management": false, 00:05:41.067 "zone_append": false, 00:05:41.067 "compare": false, 00:05:41.067 "compare_and_write": false, 00:05:41.067 "abort": true, 00:05:41.067 "seek_hole": false, 00:05:41.067 "seek_data": false, 00:05:41.067 "copy": true, 00:05:41.067 
"nvme_iov_md": false 00:05:41.067 }, 00:05:41.067 "memory_domains": [ 00:05:41.067 { 00:05:41.067 "dma_device_id": "system", 00:05:41.067 "dma_device_type": 1 00:05:41.067 }, 00:05:41.067 { 00:05:41.067 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:41.067 "dma_device_type": 2 00:05:41.067 } 00:05:41.067 ], 00:05:41.067 "driver_specific": {} 00:05:41.067 } 00:05:41.067 ]' 00:05:41.067 12:25:46 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:41.067 12:25:46 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:41.067 12:25:46 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:41.067 12:25:46 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.067 12:25:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:41.067 12:25:46 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.067 12:25:46 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:41.067 12:25:46 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.067 12:25:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:41.067 12:25:46 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.067 12:25:46 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:41.067 12:25:46 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:41.067 12:25:46 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:41.067 00:05:41.067 real 0m0.141s 00:05:41.067 user 0m0.083s 00:05:41.067 sys 0m0.030s 00:05:41.067 12:25:46 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:41.067 12:25:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:41.067 ************************************ 00:05:41.067 END TEST rpc_plugins 00:05:41.067 ************************************ 00:05:41.067 12:25:46 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:41.067 12:25:46 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:41.067 12:25:46 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.067 12:25:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.326 ************************************ 00:05:41.326 START TEST rpc_trace_cmd_test 00:05:41.326 ************************************ 00:05:41.326 12:25:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:05:41.326 12:25:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:41.326 12:25:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:41.326 12:25:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.326 12:25:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:41.326 12:25:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.326 12:25:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:41.326 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid970038", 00:05:41.326 "tpoint_group_mask": "0x8", 00:05:41.326 "iscsi_conn": { 00:05:41.326 "mask": "0x2", 00:05:41.326 "tpoint_mask": "0x0" 00:05:41.326 }, 00:05:41.326 "scsi": { 00:05:41.326 "mask": "0x4", 00:05:41.326 "tpoint_mask": "0x0" 00:05:41.326 }, 00:05:41.326 "bdev": { 00:05:41.326 "mask": "0x8", 00:05:41.326 "tpoint_mask": "0xffffffffffffffff" 00:05:41.326 }, 00:05:41.326 "nvmf_rdma": { 00:05:41.326 "mask": "0x10", 00:05:41.326 "tpoint_mask": "0x0" 00:05:41.326 }, 00:05:41.326 "nvmf_tcp": { 00:05:41.326 "mask": "0x20", 
00:05:41.326 "tpoint_mask": "0x0" 00:05:41.326 }, 00:05:41.326 "ftl": { 00:05:41.326 "mask": "0x40", 00:05:41.326 "tpoint_mask": "0x0" 00:05:41.326 }, 00:05:41.326 "blobfs": { 00:05:41.326 "mask": "0x80", 00:05:41.326 "tpoint_mask": "0x0" 00:05:41.326 }, 00:05:41.326 "dsa": { 00:05:41.326 "mask": "0x200", 00:05:41.326 "tpoint_mask": "0x0" 00:05:41.326 }, 00:05:41.326 "thread": { 00:05:41.326 "mask": "0x400", 00:05:41.326 "tpoint_mask": "0x0" 00:05:41.326 }, 00:05:41.326 "nvme_pcie": { 00:05:41.326 "mask": "0x800", 00:05:41.326 "tpoint_mask": "0x0" 00:05:41.326 }, 00:05:41.326 "iaa": { 00:05:41.326 "mask": "0x1000", 00:05:41.326 "tpoint_mask": "0x0" 00:05:41.326 }, 00:05:41.326 "nvme_tcp": { 00:05:41.326 "mask": "0x2000", 00:05:41.326 "tpoint_mask": "0x0" 00:05:41.326 }, 00:05:41.326 "bdev_nvme": { 00:05:41.326 "mask": "0x4000", 00:05:41.326 "tpoint_mask": "0x0" 00:05:41.326 }, 00:05:41.326 "sock": { 00:05:41.326 "mask": "0x8000", 00:05:41.326 "tpoint_mask": "0x0" 00:05:41.326 }, 00:05:41.326 "blob": { 00:05:41.326 "mask": "0x10000", 00:05:41.326 "tpoint_mask": "0x0" 00:05:41.326 }, 00:05:41.326 "bdev_raid": { 00:05:41.326 "mask": "0x20000", 00:05:41.326 "tpoint_mask": "0x0" 00:05:41.326 }, 00:05:41.326 "scheduler": { 00:05:41.326 "mask": "0x40000", 00:05:41.326 "tpoint_mask": "0x0" 00:05:41.326 } 00:05:41.326 }' 00:05:41.326 12:25:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:41.326 12:25:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:41.326 12:25:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:41.326 12:25:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:41.326 12:25:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:41.326 12:25:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:41.326 12:25:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:41.326 12:25:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:41.326 12:25:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:41.326 12:25:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:41.326 00:05:41.326 real 0m0.230s 00:05:41.326 user 0m0.181s 00:05:41.326 sys 0m0.044s 00:05:41.326 12:25:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:41.326 12:25:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:41.326 ************************************ 00:05:41.326 END TEST rpc_trace_cmd_test 00:05:41.326 ************************************ 00:05:41.586 12:25:46 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:41.586 12:25:46 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:41.586 12:25:46 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:41.586 12:25:46 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:41.586 12:25:46 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.586 12:25:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.586 ************************************ 00:05:41.586 START TEST rpc_daemon_integrity 00:05:41.586 ************************************ 00:05:41.586 12:25:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:41.586 12:25:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:41.586 12:25:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.586 12:25:46 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.586 12:25:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.586 12:25:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:41.586 12:25:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:41.586 12:25:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:41.586 12:25:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:41.586 12:25:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.586 12:25:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.586 12:25:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.586 12:25:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:41.586 12:25:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:41.586 12:25:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.586 12:25:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.586 12:25:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.586 12:25:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:41.586 { 00:05:41.586 "name": "Malloc2", 00:05:41.586 "aliases": [ 00:05:41.586 "83c4d6a0-cb28-4027-b367-a09049d13fc6" 00:05:41.586 ], 00:05:41.586 "product_name": "Malloc disk", 00:05:41.586 "block_size": 512, 00:05:41.586 "num_blocks": 16384, 00:05:41.586 "uuid": "83c4d6a0-cb28-4027-b367-a09049d13fc6", 00:05:41.586 "assigned_rate_limits": { 00:05:41.586 "rw_ios_per_sec": 0, 00:05:41.586 "rw_mbytes_per_sec": 0, 00:05:41.586 "r_mbytes_per_sec": 0, 00:05:41.586 "w_mbytes_per_sec": 0 00:05:41.586 }, 00:05:41.586 "claimed": false, 00:05:41.586 "zoned": false, 00:05:41.586 "supported_io_types": { 00:05:41.586 "read": true, 00:05:41.586 "write": true, 00:05:41.586 "unmap": true, 00:05:41.586 "flush": true, 00:05:41.586 "reset": true, 00:05:41.586 "nvme_admin": false, 00:05:41.586 "nvme_io": false, 00:05:41.586 "nvme_io_md": false, 00:05:41.586 "write_zeroes": true, 00:05:41.586 "zcopy": true, 00:05:41.586 "get_zone_info": false, 00:05:41.586 "zone_management": false, 00:05:41.586 "zone_append": false, 00:05:41.586 "compare": false, 00:05:41.586 "compare_and_write": false, 00:05:41.586 "abort": true, 00:05:41.586 "seek_hole": false, 00:05:41.586 "seek_data": false, 00:05:41.586 "copy": true, 00:05:41.586 "nvme_iov_md": false 00:05:41.586 }, 00:05:41.586 "memory_domains": [ 00:05:41.586 { 00:05:41.586 "dma_device_id": "system", 00:05:41.586 "dma_device_type": 1 00:05:41.586 }, 00:05:41.586 { 00:05:41.586 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:41.586 "dma_device_type": 2 00:05:41.586 } 00:05:41.586 ], 00:05:41.586 "driver_specific": {} 00:05:41.586 } 00:05:41.586 ]' 00:05:41.586 12:25:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:41.586 12:25:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:41.586 12:25:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:41.586 12:25:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.586 12:25:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.586 [2024-12-16 12:25:47.081226] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:41.586 
[2024-12-16 12:25:47.081259] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:41.586 [2024-12-16 12:25:47.081280] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x57b87f0 00:05:41.587 [2024-12-16 12:25:47.081290] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:41.587 [2024-12-16 12:25:47.082050] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:41.587 [2024-12-16 12:25:47.082074] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:41.587 Passthru0 00:05:41.587 12:25:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.587 12:25:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:41.587 12:25:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.587 12:25:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.587 12:25:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.587 12:25:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:41.587 { 00:05:41.587 "name": "Malloc2", 00:05:41.587 "aliases": [ 00:05:41.587 "83c4d6a0-cb28-4027-b367-a09049d13fc6" 00:05:41.587 ], 00:05:41.587 "product_name": "Malloc disk", 00:05:41.587 "block_size": 512, 00:05:41.587 "num_blocks": 16384, 00:05:41.587 "uuid": "83c4d6a0-cb28-4027-b367-a09049d13fc6", 00:05:41.587 "assigned_rate_limits": { 00:05:41.587 "rw_ios_per_sec": 0, 00:05:41.587 "rw_mbytes_per_sec": 0, 00:05:41.587 "r_mbytes_per_sec": 0, 00:05:41.587 "w_mbytes_per_sec": 0 00:05:41.587 }, 00:05:41.587 "claimed": true, 00:05:41.587 "claim_type": "exclusive_write", 00:05:41.587 "zoned": false, 00:05:41.587 "supported_io_types": { 00:05:41.587 "read": true, 00:05:41.587 "write": true, 00:05:41.587 "unmap": true, 00:05:41.587 "flush": true, 00:05:41.587 "reset": true, 00:05:41.587 "nvme_admin": false, 00:05:41.587 "nvme_io": false, 00:05:41.587 "nvme_io_md": false, 00:05:41.587 "write_zeroes": true, 00:05:41.587 "zcopy": true, 00:05:41.587 "get_zone_info": false, 00:05:41.587 "zone_management": false, 00:05:41.587 "zone_append": false, 00:05:41.587 "compare": false, 00:05:41.587 "compare_and_write": false, 00:05:41.587 "abort": true, 00:05:41.587 "seek_hole": false, 00:05:41.587 "seek_data": false, 00:05:41.587 "copy": true, 00:05:41.587 "nvme_iov_md": false 00:05:41.587 }, 00:05:41.587 "memory_domains": [ 00:05:41.587 { 00:05:41.587 "dma_device_id": "system", 00:05:41.587 "dma_device_type": 1 00:05:41.587 }, 00:05:41.587 { 00:05:41.587 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:41.587 "dma_device_type": 2 00:05:41.587 } 00:05:41.587 ], 00:05:41.587 "driver_specific": {} 00:05:41.587 }, 00:05:41.587 { 00:05:41.587 "name": "Passthru0", 00:05:41.587 "aliases": [ 00:05:41.587 "09a82842-00ee-5067-8326-de68372b734f" 00:05:41.587 ], 00:05:41.587 "product_name": "passthru", 00:05:41.587 "block_size": 512, 00:05:41.587 "num_blocks": 16384, 00:05:41.587 "uuid": "09a82842-00ee-5067-8326-de68372b734f", 00:05:41.587 "assigned_rate_limits": { 00:05:41.587 "rw_ios_per_sec": 0, 00:05:41.587 "rw_mbytes_per_sec": 0, 00:05:41.587 "r_mbytes_per_sec": 0, 00:05:41.587 "w_mbytes_per_sec": 0 00:05:41.587 }, 00:05:41.587 "claimed": false, 00:05:41.587 "zoned": false, 00:05:41.587 "supported_io_types": { 00:05:41.587 "read": true, 00:05:41.587 "write": true, 00:05:41.587 "unmap": true, 00:05:41.587 "flush": true, 00:05:41.587 "reset": true, 
00:05:41.587 "nvme_admin": false, 00:05:41.587 "nvme_io": false, 00:05:41.587 "nvme_io_md": false, 00:05:41.587 "write_zeroes": true, 00:05:41.587 "zcopy": true, 00:05:41.587 "get_zone_info": false, 00:05:41.587 "zone_management": false, 00:05:41.587 "zone_append": false, 00:05:41.587 "compare": false, 00:05:41.587 "compare_and_write": false, 00:05:41.587 "abort": true, 00:05:41.587 "seek_hole": false, 00:05:41.587 "seek_data": false, 00:05:41.587 "copy": true, 00:05:41.587 "nvme_iov_md": false 00:05:41.587 }, 00:05:41.587 "memory_domains": [ 00:05:41.587 { 00:05:41.587 "dma_device_id": "system", 00:05:41.587 "dma_device_type": 1 00:05:41.587 }, 00:05:41.587 { 00:05:41.587 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:41.587 "dma_device_type": 2 00:05:41.587 } 00:05:41.587 ], 00:05:41.587 "driver_specific": { 00:05:41.587 "passthru": { 00:05:41.587 "name": "Passthru0", 00:05:41.587 "base_bdev_name": "Malloc2" 00:05:41.587 } 00:05:41.587 } 00:05:41.587 } 00:05:41.587 ]' 00:05:41.587 12:25:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:41.847 12:25:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:41.847 12:25:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:41.847 12:25:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.847 12:25:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.847 12:25:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.847 12:25:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:41.847 12:25:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.847 12:25:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.847 12:25:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.847 12:25:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:41.847 12:25:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.847 12:25:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.847 12:25:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.847 12:25:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:41.847 12:25:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:41.847 12:25:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:41.847 00:05:41.847 real 0m0.283s 00:05:41.847 user 0m0.177s 00:05:41.847 sys 0m0.056s 00:05:41.847 12:25:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:41.847 12:25:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.847 ************************************ 00:05:41.847 END TEST rpc_daemon_integrity 00:05:41.847 ************************************ 00:05:41.847 12:25:47 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:41.847 12:25:47 rpc -- rpc/rpc.sh@84 -- # killprocess 970038 00:05:41.847 12:25:47 rpc -- common/autotest_common.sh@954 -- # '[' -z 970038 ']' 00:05:41.847 12:25:47 rpc -- common/autotest_common.sh@958 -- # kill -0 970038 00:05:41.847 12:25:47 rpc -- common/autotest_common.sh@959 -- # uname 00:05:41.847 12:25:47 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:41.847 12:25:47 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 970038 
00:05:41.847 12:25:47 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:41.847 12:25:47 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:41.847 12:25:47 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 970038' 00:05:41.847 killing process with pid 970038 00:05:41.847 12:25:47 rpc -- common/autotest_common.sh@973 -- # kill 970038 00:05:41.847 12:25:47 rpc -- common/autotest_common.sh@978 -- # wait 970038 00:05:42.106 00:05:42.106 real 0m2.144s 00:05:42.106 user 0m2.685s 00:05:42.106 sys 0m0.848s 00:05:42.106 12:25:47 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:42.106 12:25:47 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.106 ************************************ 00:05:42.106 END TEST rpc 00:05:42.106 ************************************ 00:05:42.106 12:25:47 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:42.106 12:25:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:42.106 12:25:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:42.106 12:25:47 -- common/autotest_common.sh@10 -- # set +x 00:05:42.366 ************************************ 00:05:42.366 START TEST skip_rpc 00:05:42.366 ************************************ 00:05:42.366 12:25:47 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:42.366 * Looking for test storage... 00:05:42.366 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:05:42.366 12:25:47 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:42.366 12:25:47 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:42.366 12:25:47 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:42.366 12:25:47 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:42.366 12:25:47 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:42.366 12:25:47 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:42.366 12:25:47 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:42.366 12:25:47 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:42.366 12:25:47 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:42.366 12:25:47 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:42.366 12:25:47 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:42.366 12:25:47 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:42.366 12:25:47 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:42.366 12:25:47 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:42.366 12:25:47 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:42.366 12:25:47 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:42.366 12:25:47 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:42.366 12:25:47 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:42.366 12:25:47 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:42.366 12:25:47 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:42.366 12:25:47 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:42.366 12:25:47 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:42.366 12:25:47 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:42.366 12:25:47 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:42.366 12:25:47 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:42.366 12:25:47 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:42.366 12:25:47 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:42.366 12:25:47 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:42.366 12:25:47 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:42.366 12:25:47 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:42.366 12:25:47 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:42.366 12:25:47 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:42.366 12:25:47 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:42.366 12:25:47 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:42.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.366 --rc genhtml_branch_coverage=1 00:05:42.366 --rc genhtml_function_coverage=1 00:05:42.366 --rc genhtml_legend=1 00:05:42.366 --rc geninfo_all_blocks=1 00:05:42.366 --rc geninfo_unexecuted_blocks=1 00:05:42.366 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:42.366 ' 00:05:42.366 12:25:47 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:42.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.366 --rc genhtml_branch_coverage=1 00:05:42.366 --rc genhtml_function_coverage=1 00:05:42.366 --rc genhtml_legend=1 00:05:42.366 --rc geninfo_all_blocks=1 00:05:42.366 --rc geninfo_unexecuted_blocks=1 00:05:42.366 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:42.366 ' 00:05:42.366 12:25:47 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:42.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.366 --rc genhtml_branch_coverage=1 00:05:42.366 --rc genhtml_function_coverage=1 00:05:42.366 --rc genhtml_legend=1 00:05:42.366 --rc geninfo_all_blocks=1 00:05:42.366 --rc geninfo_unexecuted_blocks=1 00:05:42.366 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:42.366 ' 00:05:42.366 12:25:47 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:42.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.366 --rc genhtml_branch_coverage=1 00:05:42.366 --rc genhtml_function_coverage=1 00:05:42.366 --rc genhtml_legend=1 00:05:42.366 --rc geninfo_all_blocks=1 00:05:42.366 --rc geninfo_unexecuted_blocks=1 00:05:42.366 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:42.366 ' 00:05:42.366 12:25:47 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:05:42.366 12:25:47 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:05:42.366 12:25:47 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:42.366 12:25:47 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:42.366 12:25:47 
skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:42.366 12:25:47 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.626 ************************************ 00:05:42.626 START TEST skip_rpc 00:05:42.626 ************************************ 00:05:42.626 12:25:47 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:05:42.626 12:25:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=970513 00:05:42.626 12:25:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:42.626 12:25:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:42.626 12:25:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:42.626 [2024-12-16 12:25:47.968218] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:05:42.626 [2024-12-16 12:25:47.968280] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid970513 ] 00:05:42.626 [2024-12-16 12:25:48.035880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.626 [2024-12-16 12:25:48.075859] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.900 12:25:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:47.900 12:25:52 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:47.900 12:25:52 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:47.900 12:25:52 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:47.900 12:25:52 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:47.900 12:25:52 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:47.900 12:25:52 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:47.900 12:25:52 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:47.900 12:25:52 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.900 12:25:52 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.900 12:25:52 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:47.900 12:25:52 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:47.900 12:25:52 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:47.900 12:25:52 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:47.900 12:25:52 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:47.900 12:25:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:47.901 12:25:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 970513 00:05:47.901 12:25:52 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 970513 ']' 00:05:47.901 12:25:52 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 970513 00:05:47.901 12:25:52 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:47.901 12:25:52 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:47.901 12:25:52 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 970513 00:05:47.901 
12:25:53 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:47.901 12:25:53 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:47.901 12:25:53 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 970513' 00:05:47.901 killing process with pid 970513 00:05:47.901 12:25:53 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 970513 00:05:47.901 12:25:53 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 970513 00:05:47.901 00:05:47.901 real 0m5.372s 00:05:47.901 user 0m5.147s 00:05:47.901 sys 0m0.275s 00:05:47.901 12:25:53 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:47.901 12:25:53 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.901 ************************************ 00:05:47.901 END TEST skip_rpc 00:05:47.901 ************************************ 00:05:47.901 12:25:53 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:47.901 12:25:53 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:47.901 12:25:53 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:47.901 12:25:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.901 ************************************ 00:05:47.901 START TEST skip_rpc_with_json 00:05:47.901 ************************************ 00:05:47.901 12:25:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:05:47.901 12:25:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:47.901 12:25:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:47.901 12:25:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=971599 00:05:47.901 12:25:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:47.901 12:25:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 971599 00:05:47.901 12:25:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 971599 ']' 00:05:47.901 12:25:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.901 12:25:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:47.901 12:25:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:47.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:47.901 12:25:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:47.901 12:25:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:47.901 [2024-12-16 12:25:53.402386] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 
00:05:47.901 [2024-12-16 12:25:53.402432] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid971599 ] 00:05:48.161 [2024-12-16 12:25:53.469814] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.161 [2024-12-16 12:25:53.513945] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.161 12:25:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:48.161 12:25:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:05:48.161 12:25:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:48.161 12:25:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.161 12:25:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:48.161 [2024-12-16 12:25:53.724939] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:48.439 request: 00:05:48.439 { 00:05:48.439 "trtype": "tcp", 00:05:48.439 "method": "nvmf_get_transports", 00:05:48.439 "req_id": 1 00:05:48.439 } 00:05:48.439 Got JSON-RPC error response 00:05:48.439 response: 00:05:48.439 { 00:05:48.439 "code": -19, 00:05:48.439 "message": "No such device" 00:05:48.439 } 00:05:48.439 12:25:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:48.439 12:25:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:48.439 12:25:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.439 12:25:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:48.439 [2024-12-16 12:25:53.737046] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:48.439 12:25:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.439 12:25:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:48.439 12:25:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.439 12:25:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:48.439 12:25:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.439 12:25:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:05:48.439 { 00:05:48.439 "subsystems": [ 00:05:48.439 { 00:05:48.439 "subsystem": "scheduler", 00:05:48.439 "config": [ 00:05:48.439 { 00:05:48.439 "method": "framework_set_scheduler", 00:05:48.439 "params": { 00:05:48.439 "name": "static" 00:05:48.439 } 00:05:48.439 } 00:05:48.439 ] 00:05:48.439 }, 00:05:48.439 { 00:05:48.439 "subsystem": "vmd", 00:05:48.439 "config": [] 00:05:48.439 }, 00:05:48.439 { 00:05:48.439 "subsystem": "sock", 00:05:48.439 "config": [ 00:05:48.439 { 00:05:48.439 "method": "sock_set_default_impl", 00:05:48.439 "params": { 00:05:48.439 "impl_name": "posix" 00:05:48.439 } 00:05:48.439 }, 00:05:48.439 { 00:05:48.439 "method": "sock_impl_set_options", 00:05:48.439 "params": { 00:05:48.439 "impl_name": "ssl", 00:05:48.439 "recv_buf_size": 4096, 00:05:48.439 "send_buf_size": 4096, 00:05:48.439 "enable_recv_pipe": true, 00:05:48.439 "enable_quickack": false, 00:05:48.439 "enable_placement_id": 
0, 00:05:48.439 "enable_zerocopy_send_server": true, 00:05:48.439 "enable_zerocopy_send_client": false, 00:05:48.439 "zerocopy_threshold": 0, 00:05:48.439 "tls_version": 0, 00:05:48.439 "enable_ktls": false 00:05:48.439 } 00:05:48.439 }, 00:05:48.439 { 00:05:48.439 "method": "sock_impl_set_options", 00:05:48.439 "params": { 00:05:48.439 "impl_name": "posix", 00:05:48.439 "recv_buf_size": 2097152, 00:05:48.439 "send_buf_size": 2097152, 00:05:48.439 "enable_recv_pipe": true, 00:05:48.439 "enable_quickack": false, 00:05:48.439 "enable_placement_id": 0, 00:05:48.439 "enable_zerocopy_send_server": true, 00:05:48.439 "enable_zerocopy_send_client": false, 00:05:48.439 "zerocopy_threshold": 0, 00:05:48.439 "tls_version": 0, 00:05:48.439 "enable_ktls": false 00:05:48.439 } 00:05:48.439 } 00:05:48.439 ] 00:05:48.439 }, 00:05:48.439 { 00:05:48.439 "subsystem": "iobuf", 00:05:48.439 "config": [ 00:05:48.439 { 00:05:48.439 "method": "iobuf_set_options", 00:05:48.439 "params": { 00:05:48.440 "small_pool_count": 8192, 00:05:48.440 "large_pool_count": 1024, 00:05:48.440 "small_bufsize": 8192, 00:05:48.440 "large_bufsize": 135168, 00:05:48.440 "enable_numa": false 00:05:48.440 } 00:05:48.440 } 00:05:48.440 ] 00:05:48.440 }, 00:05:48.440 { 00:05:48.440 "subsystem": "keyring", 00:05:48.440 "config": [] 00:05:48.440 }, 00:05:48.440 { 00:05:48.440 "subsystem": "vfio_user_target", 00:05:48.440 "config": null 00:05:48.440 }, 00:05:48.440 { 00:05:48.440 "subsystem": "fsdev", 00:05:48.440 "config": [ 00:05:48.440 { 00:05:48.440 "method": "fsdev_set_opts", 00:05:48.440 "params": { 00:05:48.440 "fsdev_io_pool_size": 65535, 00:05:48.440 "fsdev_io_cache_size": 256 00:05:48.440 } 00:05:48.440 } 00:05:48.440 ] 00:05:48.440 }, 00:05:48.440 { 00:05:48.440 "subsystem": "accel", 00:05:48.440 "config": [ 00:05:48.440 { 00:05:48.440 "method": "accel_set_options", 00:05:48.440 "params": { 00:05:48.440 "small_cache_size": 128, 00:05:48.440 "large_cache_size": 16, 00:05:48.440 "task_count": 2048, 00:05:48.440 "sequence_count": 2048, 00:05:48.440 "buf_count": 2048 00:05:48.440 } 00:05:48.440 } 00:05:48.440 ] 00:05:48.440 }, 00:05:48.440 { 00:05:48.440 "subsystem": "bdev", 00:05:48.440 "config": [ 00:05:48.440 { 00:05:48.440 "method": "bdev_set_options", 00:05:48.440 "params": { 00:05:48.440 "bdev_io_pool_size": 65535, 00:05:48.440 "bdev_io_cache_size": 256, 00:05:48.440 "bdev_auto_examine": true, 00:05:48.440 "iobuf_small_cache_size": 128, 00:05:48.440 "iobuf_large_cache_size": 16 00:05:48.440 } 00:05:48.440 }, 00:05:48.440 { 00:05:48.440 "method": "bdev_raid_set_options", 00:05:48.440 "params": { 00:05:48.440 "process_window_size_kb": 1024, 00:05:48.440 "process_max_bandwidth_mb_sec": 0 00:05:48.440 } 00:05:48.440 }, 00:05:48.440 { 00:05:48.440 "method": "bdev_nvme_set_options", 00:05:48.440 "params": { 00:05:48.440 "action_on_timeout": "none", 00:05:48.440 "timeout_us": 0, 00:05:48.440 "timeout_admin_us": 0, 00:05:48.440 "keep_alive_timeout_ms": 10000, 00:05:48.440 "arbitration_burst": 0, 00:05:48.440 "low_priority_weight": 0, 00:05:48.440 "medium_priority_weight": 0, 00:05:48.440 "high_priority_weight": 0, 00:05:48.440 "nvme_adminq_poll_period_us": 10000, 00:05:48.440 "nvme_ioq_poll_period_us": 0, 00:05:48.440 "io_queue_requests": 0, 00:05:48.440 "delay_cmd_submit": true, 00:05:48.440 "transport_retry_count": 4, 00:05:48.440 "bdev_retry_count": 3, 00:05:48.440 "transport_ack_timeout": 0, 00:05:48.440 "ctrlr_loss_timeout_sec": 0, 00:05:48.440 "reconnect_delay_sec": 0, 00:05:48.440 "fast_io_fail_timeout_sec": 0, 00:05:48.440 
"disable_auto_failback": false, 00:05:48.440 "generate_uuids": false, 00:05:48.440 "transport_tos": 0, 00:05:48.440 "nvme_error_stat": false, 00:05:48.440 "rdma_srq_size": 0, 00:05:48.440 "io_path_stat": false, 00:05:48.440 "allow_accel_sequence": false, 00:05:48.440 "rdma_max_cq_size": 0, 00:05:48.440 "rdma_cm_event_timeout_ms": 0, 00:05:48.440 "dhchap_digests": [ 00:05:48.440 "sha256", 00:05:48.440 "sha384", 00:05:48.440 "sha512" 00:05:48.440 ], 00:05:48.440 "dhchap_dhgroups": [ 00:05:48.440 "null", 00:05:48.440 "ffdhe2048", 00:05:48.440 "ffdhe3072", 00:05:48.440 "ffdhe4096", 00:05:48.440 "ffdhe6144", 00:05:48.440 "ffdhe8192" 00:05:48.440 ], 00:05:48.440 "rdma_umr_per_io": false 00:05:48.440 } 00:05:48.440 }, 00:05:48.440 { 00:05:48.440 "method": "bdev_nvme_set_hotplug", 00:05:48.440 "params": { 00:05:48.440 "period_us": 100000, 00:05:48.440 "enable": false 00:05:48.440 } 00:05:48.440 }, 00:05:48.440 { 00:05:48.440 "method": "bdev_iscsi_set_options", 00:05:48.440 "params": { 00:05:48.440 "timeout_sec": 30 00:05:48.440 } 00:05:48.440 }, 00:05:48.440 { 00:05:48.440 "method": "bdev_wait_for_examine" 00:05:48.440 } 00:05:48.440 ] 00:05:48.440 }, 00:05:48.440 { 00:05:48.440 "subsystem": "nvmf", 00:05:48.440 "config": [ 00:05:48.440 { 00:05:48.440 "method": "nvmf_set_config", 00:05:48.440 "params": { 00:05:48.440 "discovery_filter": "match_any", 00:05:48.440 "admin_cmd_passthru": { 00:05:48.440 "identify_ctrlr": false 00:05:48.440 }, 00:05:48.440 "dhchap_digests": [ 00:05:48.440 "sha256", 00:05:48.440 "sha384", 00:05:48.440 "sha512" 00:05:48.440 ], 00:05:48.440 "dhchap_dhgroups": [ 00:05:48.440 "null", 00:05:48.440 "ffdhe2048", 00:05:48.440 "ffdhe3072", 00:05:48.440 "ffdhe4096", 00:05:48.440 "ffdhe6144", 00:05:48.440 "ffdhe8192" 00:05:48.440 ] 00:05:48.440 } 00:05:48.440 }, 00:05:48.440 { 00:05:48.440 "method": "nvmf_set_max_subsystems", 00:05:48.440 "params": { 00:05:48.440 "max_subsystems": 1024 00:05:48.440 } 00:05:48.440 }, 00:05:48.440 { 00:05:48.440 "method": "nvmf_set_crdt", 00:05:48.440 "params": { 00:05:48.440 "crdt1": 0, 00:05:48.440 "crdt2": 0, 00:05:48.440 "crdt3": 0 00:05:48.440 } 00:05:48.440 }, 00:05:48.440 { 00:05:48.440 "method": "nvmf_create_transport", 00:05:48.440 "params": { 00:05:48.440 "trtype": "TCP", 00:05:48.440 "max_queue_depth": 128, 00:05:48.440 "max_io_qpairs_per_ctrlr": 127, 00:05:48.440 "in_capsule_data_size": 4096, 00:05:48.440 "max_io_size": 131072, 00:05:48.440 "io_unit_size": 131072, 00:05:48.440 "max_aq_depth": 128, 00:05:48.440 "num_shared_buffers": 511, 00:05:48.440 "buf_cache_size": 4294967295, 00:05:48.440 "dif_insert_or_strip": false, 00:05:48.440 "zcopy": false, 00:05:48.440 "c2h_success": true, 00:05:48.440 "sock_priority": 0, 00:05:48.440 "abort_timeout_sec": 1, 00:05:48.440 "ack_timeout": 0, 00:05:48.440 "data_wr_pool_size": 0 00:05:48.440 } 00:05:48.440 } 00:05:48.440 ] 00:05:48.440 }, 00:05:48.440 { 00:05:48.440 "subsystem": "nbd", 00:05:48.440 "config": [] 00:05:48.440 }, 00:05:48.440 { 00:05:48.440 "subsystem": "ublk", 00:05:48.440 "config": [] 00:05:48.440 }, 00:05:48.440 { 00:05:48.440 "subsystem": "vhost_blk", 00:05:48.440 "config": [] 00:05:48.440 }, 00:05:48.440 { 00:05:48.440 "subsystem": "scsi", 00:05:48.440 "config": null 00:05:48.440 }, 00:05:48.440 { 00:05:48.440 "subsystem": "iscsi", 00:05:48.440 "config": [ 00:05:48.440 { 00:05:48.440 "method": "iscsi_set_options", 00:05:48.440 "params": { 00:05:48.440 "node_base": "iqn.2016-06.io.spdk", 00:05:48.440 "max_sessions": 128, 00:05:48.440 "max_connections_per_session": 2, 00:05:48.440 
"max_queue_depth": 64, 00:05:48.440 "default_time2wait": 2, 00:05:48.440 "default_time2retain": 20, 00:05:48.440 "first_burst_length": 8192, 00:05:48.440 "immediate_data": true, 00:05:48.440 "allow_duplicated_isid": false, 00:05:48.440 "error_recovery_level": 0, 00:05:48.440 "nop_timeout": 60, 00:05:48.440 "nop_in_interval": 30, 00:05:48.440 "disable_chap": false, 00:05:48.440 "require_chap": false, 00:05:48.440 "mutual_chap": false, 00:05:48.440 "chap_group": 0, 00:05:48.440 "max_large_datain_per_connection": 64, 00:05:48.440 "max_r2t_per_connection": 4, 00:05:48.440 "pdu_pool_size": 36864, 00:05:48.440 "immediate_data_pool_size": 16384, 00:05:48.440 "data_out_pool_size": 2048 00:05:48.441 } 00:05:48.441 } 00:05:48.441 ] 00:05:48.441 }, 00:05:48.441 { 00:05:48.441 "subsystem": "vhost_scsi", 00:05:48.441 "config": [] 00:05:48.441 } 00:05:48.441 ] 00:05:48.441 } 00:05:48.441 12:25:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:48.441 12:25:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 971599 00:05:48.441 12:25:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 971599 ']' 00:05:48.441 12:25:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 971599 00:05:48.441 12:25:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:48.441 12:25:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:48.441 12:25:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 971599 00:05:48.441 12:25:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:48.441 12:25:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:48.441 12:25:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 971599' 00:05:48.441 killing process with pid 971599 00:05:48.441 12:25:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 971599 00:05:48.441 12:25:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 971599 00:05:49.011 12:25:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=971747 00:05:49.011 12:25:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:05:49.011 12:25:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:54.291 12:25:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 971747 00:05:54.291 12:25:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 971747 ']' 00:05:54.291 12:25:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 971747 00:05:54.291 12:25:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:54.291 12:25:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:54.291 12:25:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 971747 00:05:54.291 12:25:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:54.291 12:25:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:54.291 12:25:59 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 971747' 00:05:54.291 killing process with pid 971747 00:05:54.291 12:25:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 971747 00:05:54.291 12:25:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 971747 00:05:54.292 12:25:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:05:54.292 12:25:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:05:54.292 00:05:54.292 real 0m6.262s 00:05:54.292 user 0m5.979s 00:05:54.292 sys 0m0.621s 00:05:54.292 12:25:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:54.292 12:25:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:54.292 ************************************ 00:05:54.292 END TEST skip_rpc_with_json 00:05:54.292 ************************************ 00:05:54.292 12:25:59 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:54.292 12:25:59 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:54.292 12:25:59 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:54.292 12:25:59 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.292 ************************************ 00:05:54.292 START TEST skip_rpc_with_delay 00:05:54.292 ************************************ 00:05:54.292 12:25:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:54.292 12:25:59 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:54.292 12:25:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:05:54.292 12:25:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:54.292 12:25:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:54.292 12:25:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:54.292 12:25:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:54.292 12:25:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:54.292 12:25:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:54.292 12:25:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:54.292 12:25:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:54.292 12:25:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:54.292 12:25:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 
--no-rpc-server -m 0x1 --wait-for-rpc 00:05:54.292 [2024-12-16 12:25:59.750608] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:05:54.292 12:25:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:54.292 12:25:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:54.292 12:25:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:54.292 12:25:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:54.292 00:05:54.292 real 0m0.049s 00:05:54.292 user 0m0.024s 00:05:54.292 sys 0m0.024s 00:05:54.292 12:25:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:54.292 12:25:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:54.292 ************************************ 00:05:54.292 END TEST skip_rpc_with_delay 00:05:54.292 ************************************ 00:05:54.292 12:25:59 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:54.292 12:25:59 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:54.292 12:25:59 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:54.292 12:25:59 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:54.292 12:25:59 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:54.292 12:25:59 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.292 ************************************ 00:05:54.292 START TEST exit_on_failed_rpc_init 00:05:54.292 ************************************ 00:05:54.292 12:25:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:54.292 12:25:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=972729 00:05:54.292 12:25:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 972729 00:05:54.292 12:25:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:54.292 12:25:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 972729 ']' 00:05:54.292 12:25:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.292 12:25:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:54.292 12:25:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:54.292 12:25:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:54.292 12:25:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:54.552 [2024-12-16 12:25:59.876524] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 
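[Editor's note — illustrative sketch, not part of the captured trace] The skip_rpc_with_delay check that just completed hinges on one invariant: spdk_tgt rejects '--wait-for-rpc' when '--no-rpc-server' disables the RPC server, because nothing could ever deliver the start-up RPC. A hand-run reproduction, using the same binary path and flags as the trace, would be:

    # expected to fail immediately with the error quoted in the trace
    /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
    # => "Cannot use '--wait-for-rpc' if no RPC server is going to be started."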
00:05:54.552 [2024-12-16 12:25:59.876580] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid972729 ] 00:05:54.552 [2024-12-16 12:25:59.946151] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.552 [2024-12-16 12:25:59.988680] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.813 12:26:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:54.813 12:26:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:54.813 12:26:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:54.813 12:26:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:54.813 12:26:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:54.813 12:26:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:54.813 12:26:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:54.813 12:26:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:54.813 12:26:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:54.813 12:26:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:54.813 12:26:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:54.813 12:26:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:54.813 12:26:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:54.813 12:26:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:54.813 12:26:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:54.813 [2024-12-16 12:26:00.213488] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:05:54.813 [2024-12-16 12:26:00.213538] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid972749 ] 00:05:54.813 [2024-12-16 12:26:00.281157] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.813 [2024-12-16 12:26:00.322035] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:54.813 [2024-12-16 12:26:00.322106] rpc.c: 181:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
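[Editor's note — illustrative sketch, not part of the captured trace] The socket-in-use error above is exactly the collision exit_on_failed_rpc_init provokes: two targets cannot share the default RPC socket. A hand-run equivalent, with the binary path and core masks taken from the trace, would be:

    # first target owns /var/tmp/spdk.sock (the real test gates on waitforlisten before continuing)
    /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 &
    # second target fails RPC init and exits non-zero
    /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2
    # => "RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another."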
00:05:54.813 [2024-12-16 12:26:00.322119] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:54.813 [2024-12-16 12:26:00.322127] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:54.813 12:26:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:54.813 12:26:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:54.813 12:26:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:54.813 12:26:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:54.813 12:26:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:54.813 12:26:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:54.813 12:26:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:54.813 12:26:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 972729 00:05:54.813 12:26:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 972729 ']' 00:05:54.813 12:26:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 972729 00:05:54.813 12:26:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:54.813 12:26:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:54.813 12:26:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 972729 00:05:55.073 12:26:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:55.073 12:26:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:55.073 12:26:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 972729' 00:05:55.073 killing process with pid 972729 00:05:55.073 12:26:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 972729 00:05:55.073 12:26:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 972729 00:05:55.333 00:05:55.333 real 0m0.875s 00:05:55.333 user 0m0.895s 00:05:55.333 sys 0m0.388s 00:05:55.333 12:26:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:55.333 12:26:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:55.333 ************************************ 00:05:55.333 END TEST exit_on_failed_rpc_init 00:05:55.333 ************************************ 00:05:55.333 12:26:00 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:05:55.333 00:05:55.333 real 0m13.054s 00:05:55.333 user 0m12.250s 00:05:55.333 sys 0m1.639s 00:05:55.333 12:26:00 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:55.333 12:26:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.333 ************************************ 00:05:55.333 END TEST skip_rpc 00:05:55.333 ************************************ 00:05:55.333 12:26:00 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:55.333 12:26:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:55.333 12:26:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:55.333 12:26:00 -- 
common/autotest_common.sh@10 -- # set +x 00:05:55.333 ************************************ 00:05:55.333 START TEST rpc_client 00:05:55.333 ************************************ 00:05:55.333 12:26:00 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:55.594 * Looking for test storage... 00:05:55.594 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client 00:05:55.594 12:26:00 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:55.594 12:26:00 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:05:55.594 12:26:00 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:55.594 12:26:01 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:55.594 12:26:01 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:55.594 12:26:01 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:55.594 12:26:01 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:55.594 12:26:01 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:55.594 12:26:01 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:55.594 12:26:01 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:55.594 12:26:01 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:55.594 12:26:01 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:55.594 12:26:01 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:55.594 12:26:01 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:55.594 12:26:01 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:55.594 12:26:01 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:55.594 12:26:01 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:55.594 12:26:01 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:55.594 12:26:01 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:55.594 12:26:01 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:55.594 12:26:01 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:55.594 12:26:01 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:55.594 12:26:01 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:55.594 12:26:01 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:55.594 12:26:01 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:55.594 12:26:01 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:55.594 12:26:01 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:55.594 12:26:01 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:55.594 12:26:01 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:55.594 12:26:01 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:55.594 12:26:01 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:55.594 12:26:01 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:55.594 12:26:01 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:55.594 12:26:01 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:55.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.594 --rc genhtml_branch_coverage=1 00:05:55.594 --rc genhtml_function_coverage=1 00:05:55.594 --rc genhtml_legend=1 00:05:55.594 --rc geninfo_all_blocks=1 00:05:55.594 --rc geninfo_unexecuted_blocks=1 00:05:55.594 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:55.594 ' 00:05:55.594 12:26:01 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:55.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.594 --rc genhtml_branch_coverage=1 00:05:55.594 --rc genhtml_function_coverage=1 00:05:55.594 --rc genhtml_legend=1 00:05:55.594 --rc geninfo_all_blocks=1 00:05:55.594 --rc geninfo_unexecuted_blocks=1 00:05:55.594 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:55.594 ' 00:05:55.594 12:26:01 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:55.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.594 --rc genhtml_branch_coverage=1 00:05:55.594 --rc genhtml_function_coverage=1 00:05:55.594 --rc genhtml_legend=1 00:05:55.594 --rc geninfo_all_blocks=1 00:05:55.594 --rc geninfo_unexecuted_blocks=1 00:05:55.594 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:55.594 ' 00:05:55.594 12:26:01 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:55.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.594 --rc genhtml_branch_coverage=1 00:05:55.594 --rc genhtml_function_coverage=1 00:05:55.594 --rc genhtml_legend=1 00:05:55.594 --rc geninfo_all_blocks=1 00:05:55.594 --rc geninfo_unexecuted_blocks=1 00:05:55.594 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:55.594 ' 00:05:55.594 12:26:01 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:55.594 OK 00:05:55.594 12:26:01 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:55.594 00:05:55.594 real 0m0.205s 00:05:55.594 user 0m0.120s 00:05:55.594 sys 0m0.103s 00:05:55.594 12:26:01 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 
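[Editor's note — illustrative sketch, not part of the captured trace] The "OK" printed above comes from the compiled JSON-RPC client unit test; it can be re-run by hand with the same path the script uses:

    # prints "OK" when the JSON-RPC client round-trips its requests successfully
    /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client_test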
00:05:55.594 12:26:01 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:55.594 ************************************ 00:05:55.594 END TEST rpc_client 00:05:55.594 ************************************ 00:05:55.594 12:26:01 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:05:55.594 12:26:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:55.594 12:26:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:55.594 12:26:01 -- common/autotest_common.sh@10 -- # set +x 00:05:55.594 ************************************ 00:05:55.594 START TEST json_config 00:05:55.594 ************************************ 00:05:55.594 12:26:01 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:05:55.855 12:26:01 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:55.855 12:26:01 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:05:55.855 12:26:01 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:55.855 12:26:01 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:55.855 12:26:01 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:55.855 12:26:01 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:55.855 12:26:01 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:55.855 12:26:01 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:55.855 12:26:01 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:55.855 12:26:01 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:55.855 12:26:01 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:55.855 12:26:01 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:55.855 12:26:01 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:55.855 12:26:01 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:55.855 12:26:01 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:55.855 12:26:01 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:55.855 12:26:01 json_config -- scripts/common.sh@345 -- # : 1 00:05:55.855 12:26:01 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:55.855 12:26:01 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:55.855 12:26:01 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:55.855 12:26:01 json_config -- scripts/common.sh@353 -- # local d=1 00:05:55.855 12:26:01 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:55.855 12:26:01 json_config -- scripts/common.sh@355 -- # echo 1 00:05:55.855 12:26:01 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:55.855 12:26:01 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:55.855 12:26:01 json_config -- scripts/common.sh@353 -- # local d=2 00:05:55.855 12:26:01 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:55.855 12:26:01 json_config -- scripts/common.sh@355 -- # echo 2 00:05:55.855 12:26:01 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:55.855 12:26:01 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:55.855 12:26:01 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:55.855 12:26:01 json_config -- scripts/common.sh@368 -- # return 0 00:05:55.855 12:26:01 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:55.855 12:26:01 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:55.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.855 --rc genhtml_branch_coverage=1 00:05:55.855 --rc genhtml_function_coverage=1 00:05:55.855 --rc genhtml_legend=1 00:05:55.855 --rc geninfo_all_blocks=1 00:05:55.855 --rc geninfo_unexecuted_blocks=1 00:05:55.855 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:55.855 ' 00:05:55.855 12:26:01 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:55.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.855 --rc genhtml_branch_coverage=1 00:05:55.855 --rc genhtml_function_coverage=1 00:05:55.855 --rc genhtml_legend=1 00:05:55.855 --rc geninfo_all_blocks=1 00:05:55.855 --rc geninfo_unexecuted_blocks=1 00:05:55.855 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:55.855 ' 00:05:55.855 12:26:01 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:55.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.855 --rc genhtml_branch_coverage=1 00:05:55.855 --rc genhtml_function_coverage=1 00:05:55.855 --rc genhtml_legend=1 00:05:55.855 --rc geninfo_all_blocks=1 00:05:55.855 --rc geninfo_unexecuted_blocks=1 00:05:55.855 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:55.855 ' 00:05:55.855 12:26:01 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:55.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.855 --rc genhtml_branch_coverage=1 00:05:55.855 --rc genhtml_function_coverage=1 00:05:55.855 --rc genhtml_legend=1 00:05:55.855 --rc geninfo_all_blocks=1 00:05:55.855 --rc geninfo_unexecuted_blocks=1 00:05:55.855 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:55.855 ' 00:05:55.855 12:26:01 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:05:55.855 12:26:01 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:55.856 12:26:01 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:55.856 12:26:01 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:55.856 12:26:01 json_config -- nvmf/common.sh@10 
-- # NVMF_SECOND_PORT=4421 00:05:55.856 12:26:01 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:55.856 12:26:01 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:55.856 12:26:01 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:55.856 12:26:01 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:55.856 12:26:01 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:55.856 12:26:01 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:55.856 12:26:01 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:55.856 12:26:01 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:05:55.856 12:26:01 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:05:55.856 12:26:01 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:55.856 12:26:01 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:55.856 12:26:01 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:55.856 12:26:01 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:55.856 12:26:01 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:05:55.856 12:26:01 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:55.856 12:26:01 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:55.856 12:26:01 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:55.856 12:26:01 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:55.856 12:26:01 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:55.856 12:26:01 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:55.856 12:26:01 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:55.856 12:26:01 json_config -- paths/export.sh@5 -- # export PATH 00:05:55.856 12:26:01 json_config -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:55.856 12:26:01 json_config -- nvmf/common.sh@51 -- # : 0 00:05:55.856 12:26:01 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:55.856 12:26:01 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:55.856 12:26:01 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:55.856 12:26:01 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:55.856 12:26:01 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:55.856 12:26:01 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:55.856 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:55.856 12:26:01 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:55.856 12:26:01 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:55.856 12:26:01 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:55.856 12:26:01 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/common.sh 00:05:55.856 12:26:01 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:55.856 12:26:01 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:55.856 12:26:01 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:55.856 12:26:01 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:55.856 12:26:01 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:55.856 WARNING: No tests are enabled so not running JSON configuration tests 00:05:55.856 12:26:01 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:55.856 00:05:55.856 real 0m0.185s 00:05:55.856 user 0m0.106s 00:05:55.856 sys 0m0.084s 00:05:55.856 12:26:01 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:55.856 12:26:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:55.856 ************************************ 00:05:55.856 END TEST json_config 00:05:55.856 ************************************ 00:05:55.856 12:26:01 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:55.856 12:26:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:55.856 12:26:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:55.856 12:26:01 -- common/autotest_common.sh@10 -- # set +x 00:05:55.856 ************************************ 00:05:55.856 START TEST json_config_extra_key 00:05:55.856 ************************************ 00:05:55.856 12:26:01 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:56.118 12:26:01 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:56.118 12:26:01 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov 
--version 00:05:56.118 12:26:01 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:56.118 12:26:01 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:56.118 12:26:01 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:56.118 12:26:01 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:56.118 12:26:01 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:56.118 12:26:01 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:56.118 12:26:01 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:56.118 12:26:01 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:56.118 12:26:01 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:56.118 12:26:01 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:56.118 12:26:01 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:56.118 12:26:01 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:56.118 12:26:01 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:56.118 12:26:01 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:56.118 12:26:01 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:56.118 12:26:01 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:56.118 12:26:01 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:56.118 12:26:01 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:56.118 12:26:01 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:56.118 12:26:01 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:56.118 12:26:01 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:56.118 12:26:01 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:56.118 12:26:01 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:56.118 12:26:01 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:56.118 12:26:01 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:56.118 12:26:01 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:56.118 12:26:01 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:56.118 12:26:01 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:56.118 12:26:01 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:56.118 12:26:01 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:56.118 12:26:01 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:56.118 12:26:01 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:56.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.118 --rc genhtml_branch_coverage=1 00:05:56.118 --rc genhtml_function_coverage=1 00:05:56.118 --rc genhtml_legend=1 00:05:56.118 --rc geninfo_all_blocks=1 00:05:56.118 --rc geninfo_unexecuted_blocks=1 00:05:56.118 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:56.118 ' 00:05:56.118 12:26:01 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:56.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.118 --rc genhtml_branch_coverage=1 
00:05:56.118 --rc genhtml_function_coverage=1 00:05:56.118 --rc genhtml_legend=1 00:05:56.118 --rc geninfo_all_blocks=1 00:05:56.118 --rc geninfo_unexecuted_blocks=1 00:05:56.118 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:56.118 ' 00:05:56.118 12:26:01 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:56.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.118 --rc genhtml_branch_coverage=1 00:05:56.118 --rc genhtml_function_coverage=1 00:05:56.118 --rc genhtml_legend=1 00:05:56.118 --rc geninfo_all_blocks=1 00:05:56.118 --rc geninfo_unexecuted_blocks=1 00:05:56.118 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:56.118 ' 00:05:56.118 12:26:01 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:56.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.118 --rc genhtml_branch_coverage=1 00:05:56.118 --rc genhtml_function_coverage=1 00:05:56.118 --rc genhtml_legend=1 00:05:56.118 --rc geninfo_all_blocks=1 00:05:56.118 --rc geninfo_unexecuted_blocks=1 00:05:56.118 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:56.118 ' 00:05:56.118 12:26:01 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:05:56.118 12:26:01 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:56.118 12:26:01 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:56.118 12:26:01 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:56.118 12:26:01 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:56.118 12:26:01 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:56.118 12:26:01 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:56.118 12:26:01 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:56.118 12:26:01 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:56.118 12:26:01 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:56.118 12:26:01 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:56.118 12:26:01 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:56.118 12:26:01 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:05:56.118 12:26:01 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:05:56.118 12:26:01 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:56.118 12:26:01 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:56.118 12:26:01 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:56.118 12:26:01 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:56.118 12:26:01 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:05:56.118 12:26:01 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:56.118 12:26:01 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:56.118 12:26:01 json_config_extra_key -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:56.118 12:26:01 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:56.118 12:26:01 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:56.118 12:26:01 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:56.118 12:26:01 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:56.119 12:26:01 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:56.119 12:26:01 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:56.119 12:26:01 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:56.119 12:26:01 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:56.119 12:26:01 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:56.119 12:26:01 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:56.119 12:26:01 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:56.119 12:26:01 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:56.119 12:26:01 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:56.119 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:56.119 12:26:01 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:56.119 12:26:01 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:56.119 12:26:01 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:56.119 12:26:01 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/common.sh 00:05:56.119 12:26:01 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:56.119 12:26:01 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # 
declare -A app_pid 00:05:56.119 12:26:01 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:56.119 12:26:01 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:56.119 12:26:01 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:56.119 12:26:01 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:56.119 12:26:01 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:56.119 12:26:01 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:56.119 12:26:01 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:56.119 12:26:01 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:56.119 INFO: launching applications... 00:05:56.119 12:26:01 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:05:56.119 12:26:01 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:56.119 12:26:01 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:56.119 12:26:01 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:56.119 12:26:01 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:56.119 12:26:01 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:56.119 12:26:01 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:56.119 12:26:01 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:56.119 12:26:01 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=973176 00:05:56.119 12:26:01 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:56.119 Waiting for target to run... 00:05:56.119 12:26:01 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 973176 /var/tmp/spdk_tgt.sock 00:05:56.119 12:26:01 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 973176 ']' 00:05:56.119 12:26:01 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:05:56.119 12:26:01 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:56.119 12:26:01 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:56.119 12:26:01 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:56.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:05:56.119 12:26:01 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:56.119 12:26:01 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:56.119 [2024-12-16 12:26:01.618399] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:05:56.119 [2024-12-16 12:26:01.618467] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid973176 ] 00:05:56.379 [2024-12-16 12:26:01.913827] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.639 [2024-12-16 12:26:01.947833] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.207 12:26:02 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:57.207 12:26:02 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:57.207 12:26:02 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:57.207 00:05:57.207 12:26:02 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:57.208 INFO: shutting down applications... 00:05:57.208 12:26:02 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:57.208 12:26:02 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:57.208 12:26:02 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:57.208 12:26:02 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 973176 ]] 00:05:57.208 12:26:02 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 973176 00:05:57.208 12:26:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:57.208 12:26:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:57.208 12:26:02 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 973176 00:05:57.208 12:26:02 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:57.467 12:26:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:57.467 12:26:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:57.467 12:26:02 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 973176 00:05:57.467 12:26:02 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:57.467 12:26:02 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:57.467 12:26:02 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:57.467 12:26:02 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:57.467 SPDK target shutdown done 00:05:57.467 12:26:02 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:57.467 Success 00:05:57.467 00:05:57.467 real 0m1.583s 00:05:57.467 user 0m1.307s 00:05:57.467 sys 0m0.434s 00:05:57.467 12:26:02 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:57.467 12:26:02 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:57.467 ************************************ 00:05:57.467 END TEST json_config_extra_key 00:05:57.467 ************************************ 00:05:57.467 12:26:03 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 
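[Editor's note — illustrative sketch, not part of the captured trace] The json_config_extra_key run that just finished boils down to a start/stop cycle: launch the target with the extra-key JSON on a private RPC socket, then confirm it shuts down cleanly on SIGINT. With the paths from the trace (the pid handling below is illustrative):

    /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
        -r /var/tmp/spdk_tgt.sock \
        --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json &
    pid=$!
    kill -SIGINT "$pid"   # a clean exit logs "SPDK target shutdown done" / "Success", as seen above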
00:05:57.467 12:26:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:57.467 12:26:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:57.467 12:26:03 -- common/autotest_common.sh@10 -- # set +x 00:05:57.727 ************************************ 00:05:57.727 START TEST alias_rpc 00:05:57.727 ************************************ 00:05:57.727 12:26:03 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:57.727 * Looking for test storage... 00:05:57.727 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc 00:05:57.727 12:26:03 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:57.727 12:26:03 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:57.727 12:26:03 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:57.727 12:26:03 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:57.727 12:26:03 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:57.727 12:26:03 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:57.727 12:26:03 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:57.727 12:26:03 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:57.727 12:26:03 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:57.727 12:26:03 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:57.727 12:26:03 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:57.727 12:26:03 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:57.727 12:26:03 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:57.727 12:26:03 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:57.727 12:26:03 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:57.727 12:26:03 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:57.727 12:26:03 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:57.727 12:26:03 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:57.727 12:26:03 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:57.727 12:26:03 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:57.727 12:26:03 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:57.727 12:26:03 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:57.727 12:26:03 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:57.727 12:26:03 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:57.727 12:26:03 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:57.727 12:26:03 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:57.727 12:26:03 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:57.727 12:26:03 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:57.727 12:26:03 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:57.727 12:26:03 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:57.727 12:26:03 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:57.727 12:26:03 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:57.727 12:26:03 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:57.727 12:26:03 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:57.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.727 --rc genhtml_branch_coverage=1 00:05:57.727 --rc genhtml_function_coverage=1 00:05:57.727 --rc genhtml_legend=1 00:05:57.727 --rc geninfo_all_blocks=1 00:05:57.727 --rc geninfo_unexecuted_blocks=1 00:05:57.727 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:57.727 ' 00:05:57.727 12:26:03 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:57.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.727 --rc genhtml_branch_coverage=1 00:05:57.727 --rc genhtml_function_coverage=1 00:05:57.727 --rc genhtml_legend=1 00:05:57.727 --rc geninfo_all_blocks=1 00:05:57.727 --rc geninfo_unexecuted_blocks=1 00:05:57.727 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:57.727 ' 00:05:57.727 12:26:03 alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:57.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.727 --rc genhtml_branch_coverage=1 00:05:57.727 --rc genhtml_function_coverage=1 00:05:57.727 --rc genhtml_legend=1 00:05:57.727 --rc geninfo_all_blocks=1 00:05:57.727 --rc geninfo_unexecuted_blocks=1 00:05:57.727 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:57.727 ' 00:05:57.727 12:26:03 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:57.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.727 --rc genhtml_branch_coverage=1 00:05:57.727 --rc genhtml_function_coverage=1 00:05:57.727 --rc genhtml_legend=1 00:05:57.727 --rc geninfo_all_blocks=1 00:05:57.727 --rc geninfo_unexecuted_blocks=1 00:05:57.727 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:57.728 ' 00:05:57.728 12:26:03 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:57.728 12:26:03 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=973498 00:05:57.728 12:26:03 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 973498 00:05:57.728 12:26:03 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 973498 ']' 00:05:57.728 12:26:03 alias_rpc -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:05:57.728 12:26:03 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:57.728 12:26:03 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.728 12:26:03 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:57.728 12:26:03 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.728 12:26:03 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:57.728 [2024-12-16 12:26:03.265381] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:05:57.728 [2024-12-16 12:26:03.265447] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid973498 ] 00:05:57.988 [2024-12-16 12:26:03.336480] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.988 [2024-12-16 12:26:03.379286] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.247 12:26:03 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:58.247 12:26:03 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:58.248 12:26:03 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:58.248 12:26:03 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 973498 00:05:58.248 12:26:03 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 973498 ']' 00:05:58.248 12:26:03 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 973498 00:05:58.248 12:26:03 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:58.248 12:26:03 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:58.248 12:26:03 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 973498 00:05:58.507 12:26:03 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:58.507 12:26:03 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:58.507 12:26:03 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 973498' 00:05:58.507 killing process with pid 973498 00:05:58.507 12:26:03 alias_rpc -- common/autotest_common.sh@973 -- # kill 973498 00:05:58.507 12:26:03 alias_rpc -- common/autotest_common.sh@978 -- # wait 973498 00:05:58.768 00:05:58.768 real 0m1.086s 00:05:58.768 user 0m1.089s 00:05:58.768 sys 0m0.416s 00:05:58.768 12:26:04 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:58.768 12:26:04 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.768 ************************************ 00:05:58.768 END TEST alias_rpc 00:05:58.768 ************************************ 00:05:58.768 12:26:04 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:58.768 12:26:04 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:58.768 12:26:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:58.768 12:26:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:58.768 12:26:04 -- common/autotest_common.sh@10 -- # set +x 00:05:58.768 ************************************ 00:05:58.768 START TEST spdkcli_tcp 
00:05:58.768 ************************************ 00:05:58.768 12:26:04 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:58.768 * Looking for test storage... 00:05:58.768 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli 00:05:59.029 12:26:04 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:59.029 12:26:04 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:05:59.029 12:26:04 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:59.029 12:26:04 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:59.029 12:26:04 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:59.029 12:26:04 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:59.029 12:26:04 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:59.029 12:26:04 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:59.029 12:26:04 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:59.029 12:26:04 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:59.029 12:26:04 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:59.029 12:26:04 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:59.029 12:26:04 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:59.029 12:26:04 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:59.029 12:26:04 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:59.029 12:26:04 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:59.029 12:26:04 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:59.029 12:26:04 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:59.029 12:26:04 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:59.029 12:26:04 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:59.029 12:26:04 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:59.029 12:26:04 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:59.029 12:26:04 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:59.029 12:26:04 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:59.029 12:26:04 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:59.029 12:26:04 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:59.029 12:26:04 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:59.029 12:26:04 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:59.029 12:26:04 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:59.029 12:26:04 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:59.029 12:26:04 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:59.029 12:26:04 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:59.029 12:26:04 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:59.029 12:26:04 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:59.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.029 --rc genhtml_branch_coverage=1 00:05:59.029 --rc genhtml_function_coverage=1 00:05:59.029 --rc genhtml_legend=1 00:05:59.029 --rc geninfo_all_blocks=1 00:05:59.029 --rc geninfo_unexecuted_blocks=1 00:05:59.029 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:59.029 ' 00:05:59.029 12:26:04 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:59.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.029 --rc genhtml_branch_coverage=1 00:05:59.029 --rc genhtml_function_coverage=1 00:05:59.029 --rc genhtml_legend=1 00:05:59.029 --rc geninfo_all_blocks=1 00:05:59.029 --rc geninfo_unexecuted_blocks=1 00:05:59.029 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:59.029 ' 00:05:59.029 12:26:04 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:59.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.029 --rc genhtml_branch_coverage=1 00:05:59.029 --rc genhtml_function_coverage=1 00:05:59.029 --rc genhtml_legend=1 00:05:59.029 --rc geninfo_all_blocks=1 00:05:59.029 --rc geninfo_unexecuted_blocks=1 00:05:59.029 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:59.029 ' 00:05:59.029 12:26:04 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:59.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.029 --rc genhtml_branch_coverage=1 00:05:59.029 --rc genhtml_function_coverage=1 00:05:59.029 --rc genhtml_legend=1 00:05:59.029 --rc geninfo_all_blocks=1 00:05:59.029 --rc geninfo_unexecuted_blocks=1 00:05:59.029 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:59.029 ' 00:05:59.029 12:26:04 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/common.sh 00:05:59.029 12:26:04 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:59.029 12:26:04 spdkcli_tcp -- spdkcli/common.sh@7 -- # 
spdk_clear_config_py=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/clear_config.py 00:05:59.029 12:26:04 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:59.029 12:26:04 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:59.029 12:26:04 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:59.029 12:26:04 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:59.029 12:26:04 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:59.029 12:26:04 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:59.029 12:26:04 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=973821 00:05:59.029 12:26:04 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:59.029 12:26:04 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 973821 00:05:59.029 12:26:04 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 973821 ']' 00:05:59.029 12:26:04 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.029 12:26:04 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:59.029 12:26:04 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:59.029 12:26:04 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:59.029 12:26:04 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:59.029 [2024-12-16 12:26:04.453514] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:05:59.029 [2024-12-16 12:26:04.453586] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid973821 ] 00:05:59.029 [2024-12-16 12:26:04.520955] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:59.029 [2024-12-16 12:26:04.567629] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:59.029 [2024-12-16 12:26:04.567633] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.289 12:26:04 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:59.289 12:26:04 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:59.289 12:26:04 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=973831 00:05:59.289 12:26:04 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:59.289 12:26:04 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:59.550 [ 00:05:59.550 "spdk_get_version", 00:05:59.550 "rpc_get_methods", 00:05:59.550 "notify_get_notifications", 00:05:59.550 "notify_get_types", 00:05:59.550 "trace_get_info", 00:05:59.550 "trace_get_tpoint_group_mask", 00:05:59.550 "trace_disable_tpoint_group", 00:05:59.550 "trace_enable_tpoint_group", 00:05:59.550 "trace_clear_tpoint_mask", 00:05:59.550 "trace_set_tpoint_mask", 00:05:59.550 "fsdev_set_opts", 00:05:59.550 "fsdev_get_opts", 00:05:59.550 "framework_get_pci_devices", 00:05:59.550 "framework_get_config", 00:05:59.550 "framework_get_subsystems", 00:05:59.550 "vfu_tgt_set_base_path", 00:05:59.550 "keyring_get_keys", 
00:05:59.550 "iobuf_get_stats", 00:05:59.550 "iobuf_set_options", 00:05:59.550 "sock_get_default_impl", 00:05:59.550 "sock_set_default_impl", 00:05:59.550 "sock_impl_set_options", 00:05:59.550 "sock_impl_get_options", 00:05:59.550 "vmd_rescan", 00:05:59.550 "vmd_remove_device", 00:05:59.550 "vmd_enable", 00:05:59.550 "accel_get_stats", 00:05:59.550 "accel_set_options", 00:05:59.550 "accel_set_driver", 00:05:59.550 "accel_crypto_key_destroy", 00:05:59.550 "accel_crypto_keys_get", 00:05:59.550 "accel_crypto_key_create", 00:05:59.550 "accel_assign_opc", 00:05:59.550 "accel_get_module_info", 00:05:59.550 "accel_get_opc_assignments", 00:05:59.550 "bdev_get_histogram", 00:05:59.550 "bdev_enable_histogram", 00:05:59.550 "bdev_set_qos_limit", 00:05:59.550 "bdev_set_qd_sampling_period", 00:05:59.550 "bdev_get_bdevs", 00:05:59.550 "bdev_reset_iostat", 00:05:59.550 "bdev_get_iostat", 00:05:59.550 "bdev_examine", 00:05:59.550 "bdev_wait_for_examine", 00:05:59.550 "bdev_set_options", 00:05:59.550 "scsi_get_devices", 00:05:59.550 "thread_set_cpumask", 00:05:59.550 "scheduler_set_options", 00:05:59.551 "framework_get_governor", 00:05:59.551 "framework_get_scheduler", 00:05:59.551 "framework_set_scheduler", 00:05:59.551 "framework_get_reactors", 00:05:59.551 "thread_get_io_channels", 00:05:59.551 "thread_get_pollers", 00:05:59.551 "thread_get_stats", 00:05:59.551 "framework_monitor_context_switch", 00:05:59.551 "spdk_kill_instance", 00:05:59.551 "log_enable_timestamps", 00:05:59.551 "log_get_flags", 00:05:59.551 "log_clear_flag", 00:05:59.551 "log_set_flag", 00:05:59.551 "log_get_level", 00:05:59.551 "log_set_level", 00:05:59.551 "log_get_print_level", 00:05:59.551 "log_set_print_level", 00:05:59.551 "framework_enable_cpumask_locks", 00:05:59.551 "framework_disable_cpumask_locks", 00:05:59.551 "framework_wait_init", 00:05:59.551 "framework_start_init", 00:05:59.551 "virtio_blk_create_transport", 00:05:59.551 "virtio_blk_get_transports", 00:05:59.551 "vhost_controller_set_coalescing", 00:05:59.551 "vhost_get_controllers", 00:05:59.551 "vhost_delete_controller", 00:05:59.551 "vhost_create_blk_controller", 00:05:59.551 "vhost_scsi_controller_remove_target", 00:05:59.551 "vhost_scsi_controller_add_target", 00:05:59.551 "vhost_start_scsi_controller", 00:05:59.551 "vhost_create_scsi_controller", 00:05:59.551 "ublk_recover_disk", 00:05:59.551 "ublk_get_disks", 00:05:59.551 "ublk_stop_disk", 00:05:59.551 "ublk_start_disk", 00:05:59.551 "ublk_destroy_target", 00:05:59.551 "ublk_create_target", 00:05:59.551 "nbd_get_disks", 00:05:59.551 "nbd_stop_disk", 00:05:59.551 "nbd_start_disk", 00:05:59.551 "env_dpdk_get_mem_stats", 00:05:59.551 "nvmf_stop_mdns_prr", 00:05:59.551 "nvmf_publish_mdns_prr", 00:05:59.551 "nvmf_subsystem_get_listeners", 00:05:59.551 "nvmf_subsystem_get_qpairs", 00:05:59.551 "nvmf_subsystem_get_controllers", 00:05:59.551 "nvmf_get_stats", 00:05:59.551 "nvmf_get_transports", 00:05:59.551 "nvmf_create_transport", 00:05:59.551 "nvmf_get_targets", 00:05:59.551 "nvmf_delete_target", 00:05:59.551 "nvmf_create_target", 00:05:59.551 "nvmf_subsystem_allow_any_host", 00:05:59.551 "nvmf_subsystem_set_keys", 00:05:59.551 "nvmf_subsystem_remove_host", 00:05:59.551 "nvmf_subsystem_add_host", 00:05:59.551 "nvmf_ns_remove_host", 00:05:59.551 "nvmf_ns_add_host", 00:05:59.551 "nvmf_subsystem_remove_ns", 00:05:59.551 "nvmf_subsystem_set_ns_ana_group", 00:05:59.551 "nvmf_subsystem_add_ns", 00:05:59.551 "nvmf_subsystem_listener_set_ana_state", 00:05:59.551 "nvmf_discovery_get_referrals", 00:05:59.551 
"nvmf_discovery_remove_referral", 00:05:59.551 "nvmf_discovery_add_referral", 00:05:59.551 "nvmf_subsystem_remove_listener", 00:05:59.551 "nvmf_subsystem_add_listener", 00:05:59.551 "nvmf_delete_subsystem", 00:05:59.551 "nvmf_create_subsystem", 00:05:59.551 "nvmf_get_subsystems", 00:05:59.551 "nvmf_set_crdt", 00:05:59.551 "nvmf_set_config", 00:05:59.551 "nvmf_set_max_subsystems", 00:05:59.551 "iscsi_get_histogram", 00:05:59.551 "iscsi_enable_histogram", 00:05:59.551 "iscsi_set_options", 00:05:59.551 "iscsi_get_auth_groups", 00:05:59.551 "iscsi_auth_group_remove_secret", 00:05:59.551 "iscsi_auth_group_add_secret", 00:05:59.551 "iscsi_delete_auth_group", 00:05:59.551 "iscsi_create_auth_group", 00:05:59.551 "iscsi_set_discovery_auth", 00:05:59.551 "iscsi_get_options", 00:05:59.551 "iscsi_target_node_request_logout", 00:05:59.551 "iscsi_target_node_set_redirect", 00:05:59.551 "iscsi_target_node_set_auth", 00:05:59.551 "iscsi_target_node_add_lun", 00:05:59.551 "iscsi_get_stats", 00:05:59.551 "iscsi_get_connections", 00:05:59.551 "iscsi_portal_group_set_auth", 00:05:59.551 "iscsi_start_portal_group", 00:05:59.551 "iscsi_delete_portal_group", 00:05:59.551 "iscsi_create_portal_group", 00:05:59.551 "iscsi_get_portal_groups", 00:05:59.551 "iscsi_delete_target_node", 00:05:59.551 "iscsi_target_node_remove_pg_ig_maps", 00:05:59.551 "iscsi_target_node_add_pg_ig_maps", 00:05:59.551 "iscsi_create_target_node", 00:05:59.551 "iscsi_get_target_nodes", 00:05:59.551 "iscsi_delete_initiator_group", 00:05:59.551 "iscsi_initiator_group_remove_initiators", 00:05:59.551 "iscsi_initiator_group_add_initiators", 00:05:59.551 "iscsi_create_initiator_group", 00:05:59.551 "iscsi_get_initiator_groups", 00:05:59.551 "fsdev_aio_delete", 00:05:59.551 "fsdev_aio_create", 00:05:59.551 "keyring_linux_set_options", 00:05:59.551 "keyring_file_remove_key", 00:05:59.551 "keyring_file_add_key", 00:05:59.551 "vfu_virtio_create_fs_endpoint", 00:05:59.551 "vfu_virtio_create_scsi_endpoint", 00:05:59.551 "vfu_virtio_scsi_remove_target", 00:05:59.551 "vfu_virtio_scsi_add_target", 00:05:59.551 "vfu_virtio_create_blk_endpoint", 00:05:59.551 "vfu_virtio_delete_endpoint", 00:05:59.551 "iaa_scan_accel_module", 00:05:59.551 "dsa_scan_accel_module", 00:05:59.551 "ioat_scan_accel_module", 00:05:59.551 "accel_error_inject_error", 00:05:59.551 "bdev_iscsi_delete", 00:05:59.551 "bdev_iscsi_create", 00:05:59.551 "bdev_iscsi_set_options", 00:05:59.551 "bdev_virtio_attach_controller", 00:05:59.551 "bdev_virtio_scsi_get_devices", 00:05:59.551 "bdev_virtio_detach_controller", 00:05:59.551 "bdev_virtio_blk_set_hotplug", 00:05:59.551 "bdev_ftl_set_property", 00:05:59.551 "bdev_ftl_get_properties", 00:05:59.551 "bdev_ftl_get_stats", 00:05:59.551 "bdev_ftl_unmap", 00:05:59.551 "bdev_ftl_unload", 00:05:59.551 "bdev_ftl_delete", 00:05:59.551 "bdev_ftl_load", 00:05:59.551 "bdev_ftl_create", 00:05:59.551 "bdev_aio_delete", 00:05:59.551 "bdev_aio_rescan", 00:05:59.551 "bdev_aio_create", 00:05:59.551 "blobfs_create", 00:05:59.551 "blobfs_detect", 00:05:59.551 "blobfs_set_cache_size", 00:05:59.551 "bdev_zone_block_delete", 00:05:59.551 "bdev_zone_block_create", 00:05:59.551 "bdev_delay_delete", 00:05:59.551 "bdev_delay_create", 00:05:59.551 "bdev_delay_update_latency", 00:05:59.551 "bdev_split_delete", 00:05:59.551 "bdev_split_create", 00:05:59.551 "bdev_error_inject_error", 00:05:59.551 "bdev_error_delete", 00:05:59.551 "bdev_error_create", 00:05:59.551 "bdev_raid_set_options", 00:05:59.551 "bdev_raid_remove_base_bdev", 00:05:59.551 "bdev_raid_add_base_bdev", 
00:05:59.551 "bdev_raid_delete", 00:05:59.551 "bdev_raid_create", 00:05:59.551 "bdev_raid_get_bdevs", 00:05:59.551 "bdev_lvol_set_parent_bdev", 00:05:59.551 "bdev_lvol_set_parent", 00:05:59.551 "bdev_lvol_check_shallow_copy", 00:05:59.551 "bdev_lvol_start_shallow_copy", 00:05:59.551 "bdev_lvol_grow_lvstore", 00:05:59.551 "bdev_lvol_get_lvols", 00:05:59.551 "bdev_lvol_get_lvstores", 00:05:59.551 "bdev_lvol_delete", 00:05:59.551 "bdev_lvol_set_read_only", 00:05:59.551 "bdev_lvol_resize", 00:05:59.551 "bdev_lvol_decouple_parent", 00:05:59.551 "bdev_lvol_inflate", 00:05:59.551 "bdev_lvol_rename", 00:05:59.551 "bdev_lvol_clone_bdev", 00:05:59.551 "bdev_lvol_clone", 00:05:59.551 "bdev_lvol_snapshot", 00:05:59.551 "bdev_lvol_create", 00:05:59.551 "bdev_lvol_delete_lvstore", 00:05:59.551 "bdev_lvol_rename_lvstore", 00:05:59.551 "bdev_lvol_create_lvstore", 00:05:59.551 "bdev_passthru_delete", 00:05:59.551 "bdev_passthru_create", 00:05:59.551 "bdev_nvme_cuse_unregister", 00:05:59.551 "bdev_nvme_cuse_register", 00:05:59.551 "bdev_opal_new_user", 00:05:59.551 "bdev_opal_set_lock_state", 00:05:59.551 "bdev_opal_delete", 00:05:59.551 "bdev_opal_get_info", 00:05:59.551 "bdev_opal_create", 00:05:59.551 "bdev_nvme_opal_revert", 00:05:59.551 "bdev_nvme_opal_init", 00:05:59.551 "bdev_nvme_send_cmd", 00:05:59.551 "bdev_nvme_set_keys", 00:05:59.551 "bdev_nvme_get_path_iostat", 00:05:59.551 "bdev_nvme_get_mdns_discovery_info", 00:05:59.551 "bdev_nvme_stop_mdns_discovery", 00:05:59.551 "bdev_nvme_start_mdns_discovery", 00:05:59.551 "bdev_nvme_set_multipath_policy", 00:05:59.551 "bdev_nvme_set_preferred_path", 00:05:59.551 "bdev_nvme_get_io_paths", 00:05:59.551 "bdev_nvme_remove_error_injection", 00:05:59.551 "bdev_nvme_add_error_injection", 00:05:59.551 "bdev_nvme_get_discovery_info", 00:05:59.551 "bdev_nvme_stop_discovery", 00:05:59.551 "bdev_nvme_start_discovery", 00:05:59.551 "bdev_nvme_get_controller_health_info", 00:05:59.551 "bdev_nvme_disable_controller", 00:05:59.551 "bdev_nvme_enable_controller", 00:05:59.551 "bdev_nvme_reset_controller", 00:05:59.551 "bdev_nvme_get_transport_statistics", 00:05:59.551 "bdev_nvme_apply_firmware", 00:05:59.551 "bdev_nvme_detach_controller", 00:05:59.551 "bdev_nvme_get_controllers", 00:05:59.551 "bdev_nvme_attach_controller", 00:05:59.551 "bdev_nvme_set_hotplug", 00:05:59.551 "bdev_nvme_set_options", 00:05:59.551 "bdev_null_resize", 00:05:59.551 "bdev_null_delete", 00:05:59.551 "bdev_null_create", 00:05:59.551 "bdev_malloc_delete", 00:05:59.551 "bdev_malloc_create" 00:05:59.551 ] 00:05:59.551 12:26:04 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:59.551 12:26:04 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:59.551 12:26:04 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:59.551 12:26:05 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:59.551 12:26:05 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 973821 00:05:59.551 12:26:05 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 973821 ']' 00:05:59.551 12:26:05 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 973821 00:05:59.551 12:26:05 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:59.551 12:26:05 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:59.551 12:26:05 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 973821 00:05:59.551 12:26:05 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:59.551 12:26:05 spdkcli_tcp -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:59.551 12:26:05 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 973821' 00:05:59.551 killing process with pid 973821 00:05:59.551 12:26:05 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 973821 00:05:59.551 12:26:05 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 973821 00:05:59.811 00:05:59.811 real 0m1.147s 00:05:59.811 user 0m1.914s 00:05:59.811 sys 0m0.471s 00:05:59.811 12:26:05 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:59.811 12:26:05 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:59.811 ************************************ 00:05:59.811 END TEST spdkcli_tcp 00:05:59.811 ************************************ 00:06:00.071 12:26:05 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:00.071 12:26:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:00.071 12:26:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:00.071 12:26:05 -- common/autotest_common.sh@10 -- # set +x 00:06:00.071 ************************************ 00:06:00.071 START TEST dpdk_mem_utility 00:06:00.071 ************************************ 00:06:00.071 12:26:05 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:00.071 * Looking for test storage... 00:06:00.071 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility 00:06:00.071 12:26:05 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:00.071 12:26:05 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:06:00.071 12:26:05 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:00.071 12:26:05 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:00.071 12:26:05 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:00.071 12:26:05 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:00.071 12:26:05 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:00.071 12:26:05 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:00.071 12:26:05 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:00.071 12:26:05 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:00.071 12:26:05 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:00.071 12:26:05 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:00.071 12:26:05 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:00.071 12:26:05 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:00.071 12:26:05 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:00.071 12:26:05 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:00.071 12:26:05 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:00.071 12:26:05 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:00.071 12:26:05 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:00.071 12:26:05 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:00.071 12:26:05 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:00.071 12:26:05 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:00.071 12:26:05 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:00.071 12:26:05 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:00.071 12:26:05 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:00.071 12:26:05 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:00.071 12:26:05 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:00.071 12:26:05 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:00.332 12:26:05 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:00.332 12:26:05 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:00.332 12:26:05 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:00.332 12:26:05 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:00.332 12:26:05 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:00.332 12:26:05 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:00.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.332 --rc genhtml_branch_coverage=1 00:06:00.332 --rc genhtml_function_coverage=1 00:06:00.332 --rc genhtml_legend=1 00:06:00.332 --rc geninfo_all_blocks=1 00:06:00.332 --rc geninfo_unexecuted_blocks=1 00:06:00.332 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:00.332 ' 00:06:00.332 12:26:05 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:00.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.332 --rc genhtml_branch_coverage=1 00:06:00.332 --rc genhtml_function_coverage=1 00:06:00.332 --rc genhtml_legend=1 00:06:00.332 --rc geninfo_all_blocks=1 00:06:00.332 --rc geninfo_unexecuted_blocks=1 00:06:00.332 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:00.332 ' 00:06:00.332 12:26:05 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:00.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.332 --rc genhtml_branch_coverage=1 00:06:00.332 --rc genhtml_function_coverage=1 00:06:00.332 --rc genhtml_legend=1 00:06:00.332 --rc geninfo_all_blocks=1 00:06:00.332 --rc geninfo_unexecuted_blocks=1 00:06:00.332 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:00.332 ' 00:06:00.332 12:26:05 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:00.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.332 --rc genhtml_branch_coverage=1 00:06:00.332 --rc genhtml_function_coverage=1 00:06:00.332 --rc genhtml_legend=1 00:06:00.332 --rc geninfo_all_blocks=1 00:06:00.332 --rc geninfo_unexecuted_blocks=1 00:06:00.332 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:00.332 ' 00:06:00.332 12:26:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:00.332 12:26:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=974158 00:06:00.332 12:26:05 dpdk_mem_utility -- 
dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 974158 00:06:00.332 12:26:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:00.332 12:26:05 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 974158 ']' 00:06:00.332 12:26:05 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.332 12:26:05 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:00.332 12:26:05 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.332 12:26:05 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:00.332 12:26:05 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:00.332 [2024-12-16 12:26:05.664593] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:06:00.332 [2024-12-16 12:26:05.664669] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid974158 ] 00:06:00.332 [2024-12-16 12:26:05.734113] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.332 [2024-12-16 12:26:05.773170] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.593 12:26:05 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:00.593 12:26:05 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:06:00.593 12:26:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:00.593 12:26:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:00.593 12:26:05 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:00.593 12:26:05 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:00.593 { 00:06:00.593 "filename": "/tmp/spdk_mem_dump.txt" 00:06:00.593 } 00:06:00.593 12:26:05 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:00.593 12:26:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:00.593 DPDK memory size 818.000000 MiB in 1 heap(s) 00:06:00.593 1 heaps totaling size 818.000000 MiB 00:06:00.593 size: 818.000000 MiB heap id: 0 00:06:00.593 end heaps---------- 00:06:00.593 9 mempools totaling size 603.782043 MiB 00:06:00.593 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:00.593 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:00.593 size: 100.555481 MiB name: bdev_io_974158 00:06:00.593 size: 50.003479 MiB name: msgpool_974158 00:06:00.593 size: 36.509338 MiB name: fsdev_io_974158 00:06:00.593 size: 21.763794 MiB name: PDU_Pool 00:06:00.593 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:00.593 size: 4.133484 MiB name: evtpool_974158 00:06:00.593 size: 0.026123 MiB name: Session_Pool 00:06:00.593 end mempools------- 00:06:00.593 6 memzones totaling size 4.142822 MiB 00:06:00.593 size: 1.000366 MiB name: RG_ring_0_974158 00:06:00.593 size: 1.000366 MiB name: RG_ring_1_974158 00:06:00.593 size: 1.000366 MiB name: RG_ring_4_974158 
00:06:00.593 size: 1.000366 MiB name: RG_ring_5_974158 00:06:00.593 size: 0.125366 MiB name: RG_ring_2_974158 00:06:00.593 size: 0.015991 MiB name: RG_ring_3_974158 00:06:00.593 end memzones------- 00:06:00.593 12:26:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:00.593 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:06:00.593 list of free elements. size: 10.852478 MiB 00:06:00.593 element at address: 0x200019200000 with size: 0.999878 MiB 00:06:00.593 element at address: 0x200019400000 with size: 0.999878 MiB 00:06:00.594 element at address: 0x200000400000 with size: 0.998535 MiB 00:06:00.594 element at address: 0x200032000000 with size: 0.994446 MiB 00:06:00.594 element at address: 0x200008000000 with size: 0.959839 MiB 00:06:00.594 element at address: 0x200012c00000 with size: 0.944275 MiB 00:06:00.594 element at address: 0x200019600000 with size: 0.936584 MiB 00:06:00.594 element at address: 0x200000200000 with size: 0.717346 MiB 00:06:00.594 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:06:00.594 element at address: 0x200000c00000 with size: 0.495422 MiB 00:06:00.594 element at address: 0x200003e00000 with size: 0.490723 MiB 00:06:00.594 element at address: 0x200019800000 with size: 0.485657 MiB 00:06:00.594 element at address: 0x200010600000 with size: 0.481934 MiB 00:06:00.594 element at address: 0x200028200000 with size: 0.410034 MiB 00:06:00.594 element at address: 0x200000800000 with size: 0.355042 MiB 00:06:00.594 list of standard malloc elements. size: 199.218628 MiB 00:06:00.594 element at address: 0x2000081fff80 with size: 132.000122 MiB 00:06:00.594 element at address: 0x200003ffff80 with size: 64.000122 MiB 00:06:00.594 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:00.594 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:06:00.594 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:06:00.594 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:00.594 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:06:00.594 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:00.594 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:06:00.594 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:00.594 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:00.594 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:06:00.594 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:06:00.594 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:06:00.594 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:06:00.594 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:06:00.594 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:06:00.594 element at address: 0x20000085b040 with size: 0.000183 MiB 00:06:00.594 element at address: 0x20000085b100 with size: 0.000183 MiB 00:06:00.594 element at address: 0x2000008db3c0 with size: 0.000183 MiB 00:06:00.594 element at address: 0x2000008db5c0 with size: 0.000183 MiB 00:06:00.594 element at address: 0x2000008df880 with size: 0.000183 MiB 00:06:00.594 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:06:00.594 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:06:00.594 element at address: 0x200000cff000 with size: 0.000183 MiB 00:06:00.594 element at address: 0x200000cff0c0 with size: 0.000183 
MiB 00:06:00.594 element at address: 0x200003e7da00 with size: 0.000183 MiB 00:06:00.594 element at address: 0x200003e7dac0 with size: 0.000183 MiB 00:06:00.594 element at address: 0x200003efdd80 with size: 0.000183 MiB 00:06:00.594 element at address: 0x2000080fdd80 with size: 0.000183 MiB 00:06:00.594 element at address: 0x20001067b600 with size: 0.000183 MiB 00:06:00.594 element at address: 0x20001067b6c0 with size: 0.000183 MiB 00:06:00.594 element at address: 0x2000106fb980 with size: 0.000183 MiB 00:06:00.594 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:06:00.594 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:06:00.594 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:06:00.594 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:06:00.594 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:06:00.594 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:06:00.594 element at address: 0x200028268f80 with size: 0.000183 MiB 00:06:00.594 element at address: 0x200028269040 with size: 0.000183 MiB 00:06:00.594 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:06:00.594 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:06:00.594 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:06:00.594 list of memzone associated elements. size: 607.928894 MiB 00:06:00.594 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:06:00.594 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:00.594 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:06:00.594 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:00.594 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:06:00.594 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_974158_0 00:06:00.594 element at address: 0x200000dff380 with size: 48.003052 MiB 00:06:00.594 associated memzone info: size: 48.002930 MiB name: MP_msgpool_974158_0 00:06:00.594 element at address: 0x2000107fdb80 with size: 36.008911 MiB 00:06:00.594 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_974158_0 00:06:00.594 element at address: 0x2000199be940 with size: 20.255554 MiB 00:06:00.594 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:00.594 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:06:00.594 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:00.594 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:06:00.594 associated memzone info: size: 3.000122 MiB name: MP_evtpool_974158_0 00:06:00.594 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:06:00.594 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_974158 00:06:00.594 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:00.594 associated memzone info: size: 1.007996 MiB name: MP_evtpool_974158 00:06:00.594 element at address: 0x2000106fba40 with size: 1.008118 MiB 00:06:00.594 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:00.594 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:06:00.594 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:00.594 element at address: 0x2000080fde40 with size: 1.008118 MiB 00:06:00.594 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:00.594 element at address: 0x200003efde40 with size: 1.008118 MiB 00:06:00.594 associated memzone info: size: 1.007996 MiB name: 
MP_SCSI_TASK_Pool 00:06:00.594 element at address: 0x200000cff180 with size: 1.000488 MiB 00:06:00.594 associated memzone info: size: 1.000366 MiB name: RG_ring_0_974158 00:06:00.594 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:06:00.594 associated memzone info: size: 1.000366 MiB name: RG_ring_1_974158 00:06:00.594 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:06:00.594 associated memzone info: size: 1.000366 MiB name: RG_ring_4_974158 00:06:00.594 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:06:00.594 associated memzone info: size: 1.000366 MiB name: RG_ring_5_974158 00:06:00.594 element at address: 0x20000085b1c0 with size: 0.500488 MiB 00:06:00.594 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_974158 00:06:00.594 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:06:00.594 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_974158 00:06:00.594 element at address: 0x20001067b780 with size: 0.500488 MiB 00:06:00.594 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:00.594 element at address: 0x200003e7db80 with size: 0.500488 MiB 00:06:00.594 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:00.594 element at address: 0x20001987c540 with size: 0.250488 MiB 00:06:00.594 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:00.594 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:06:00.594 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_974158 00:06:00.594 element at address: 0x2000008df940 with size: 0.125488 MiB 00:06:00.594 associated memzone info: size: 0.125366 MiB name: RG_ring_2_974158 00:06:00.594 element at address: 0x2000080f5b80 with size: 0.031738 MiB 00:06:00.594 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:00.594 element at address: 0x200028269100 with size: 0.023743 MiB 00:06:00.594 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:00.594 element at address: 0x2000008db680 with size: 0.016113 MiB 00:06:00.594 associated memzone info: size: 0.015991 MiB name: RG_ring_3_974158 00:06:00.594 element at address: 0x20002826f240 with size: 0.002441 MiB 00:06:00.594 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:00.594 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:06:00.594 associated memzone info: size: 0.000183 MiB name: MP_msgpool_974158 00:06:00.594 element at address: 0x2000008db480 with size: 0.000305 MiB 00:06:00.594 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_974158 00:06:00.594 element at address: 0x20000085af00 with size: 0.000305 MiB 00:06:00.594 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_974158 00:06:00.594 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:06:00.594 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:00.594 12:26:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:00.594 12:26:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 974158 00:06:00.594 12:26:06 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 974158 ']' 00:06:00.594 12:26:06 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 974158 00:06:00.594 12:26:06 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:06:00.594 12:26:06 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:06:00.594 12:26:06 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 974158 00:06:00.594 12:26:06 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:00.594 12:26:06 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:00.594 12:26:06 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 974158' 00:06:00.594 killing process with pid 974158 00:06:00.594 12:26:06 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 974158 00:06:00.594 12:26:06 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 974158 00:06:01.166 00:06:01.166 real 0m0.989s 00:06:01.166 user 0m0.908s 00:06:01.166 sys 0m0.427s 00:06:01.166 12:26:06 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:01.166 12:26:06 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:01.166 ************************************ 00:06:01.166 END TEST dpdk_mem_utility 00:06:01.166 ************************************ 00:06:01.166 12:26:06 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:06:01.166 12:26:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:01.166 12:26:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:01.166 12:26:06 -- common/autotest_common.sh@10 -- # set +x 00:06:01.166 ************************************ 00:06:01.166 START TEST event 00:06:01.166 ************************************ 00:06:01.166 12:26:06 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:06:01.166 * Looking for test storage... 00:06:01.166 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:06:01.166 12:26:06 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:01.166 12:26:06 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:01.166 12:26:06 event -- common/autotest_common.sh@1711 -- # lcov --version 00:06:01.166 12:26:06 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:01.166 12:26:06 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:01.166 12:26:06 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:01.166 12:26:06 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:01.166 12:26:06 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:01.166 12:26:06 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:01.166 12:26:06 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:01.166 12:26:06 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:01.166 12:26:06 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:01.166 12:26:06 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:01.166 12:26:06 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:01.166 12:26:06 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:01.166 12:26:06 event -- scripts/common.sh@344 -- # case "$op" in 00:06:01.166 12:26:06 event -- scripts/common.sh@345 -- # : 1 00:06:01.166 12:26:06 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:01.166 12:26:06 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:01.166 12:26:06 event -- scripts/common.sh@365 -- # decimal 1 00:06:01.166 12:26:06 event -- scripts/common.sh@353 -- # local d=1 00:06:01.166 12:26:06 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:01.166 12:26:06 event -- scripts/common.sh@355 -- # echo 1 00:06:01.166 12:26:06 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:01.166 12:26:06 event -- scripts/common.sh@366 -- # decimal 2 00:06:01.166 12:26:06 event -- scripts/common.sh@353 -- # local d=2 00:06:01.166 12:26:06 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:01.166 12:26:06 event -- scripts/common.sh@355 -- # echo 2 00:06:01.166 12:26:06 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:01.166 12:26:06 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:01.166 12:26:06 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:01.166 12:26:06 event -- scripts/common.sh@368 -- # return 0 00:06:01.166 12:26:06 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:01.166 12:26:06 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:01.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.166 --rc genhtml_branch_coverage=1 00:06:01.166 --rc genhtml_function_coverage=1 00:06:01.166 --rc genhtml_legend=1 00:06:01.166 --rc geninfo_all_blocks=1 00:06:01.166 --rc geninfo_unexecuted_blocks=1 00:06:01.166 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:01.166 ' 00:06:01.166 12:26:06 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:01.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.166 --rc genhtml_branch_coverage=1 00:06:01.166 --rc genhtml_function_coverage=1 00:06:01.166 --rc genhtml_legend=1 00:06:01.166 --rc geninfo_all_blocks=1 00:06:01.166 --rc geninfo_unexecuted_blocks=1 00:06:01.166 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:01.166 ' 00:06:01.166 12:26:06 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:01.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.166 --rc genhtml_branch_coverage=1 00:06:01.166 --rc genhtml_function_coverage=1 00:06:01.166 --rc genhtml_legend=1 00:06:01.166 --rc geninfo_all_blocks=1 00:06:01.166 --rc geninfo_unexecuted_blocks=1 00:06:01.166 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:01.166 ' 00:06:01.166 12:26:06 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:01.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.166 --rc genhtml_branch_coverage=1 00:06:01.166 --rc genhtml_function_coverage=1 00:06:01.166 --rc genhtml_legend=1 00:06:01.166 --rc geninfo_all_blocks=1 00:06:01.166 --rc geninfo_unexecuted_blocks=1 00:06:01.166 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:01.166 ' 00:06:01.166 12:26:06 event -- event/event.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:01.166 12:26:06 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:01.166 12:26:06 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:01.166 12:26:06 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:06:01.166 12:26:06 event -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:06:01.166 12:26:06 event -- common/autotest_common.sh@10 -- # set +x 00:06:01.427 ************************************ 00:06:01.427 START TEST event_perf 00:06:01.427 ************************************ 00:06:01.427 12:26:06 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:01.427 Running I/O for 1 seconds...[2024-12-16 12:26:06.759505] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:06:01.427 [2024-12-16 12:26:06.759608] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid974411 ] 00:06:01.427 [2024-12-16 12:26:06.836010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:01.427 [2024-12-16 12:26:06.879370] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:01.427 [2024-12-16 12:26:06.879467] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:01.427 [2024-12-16 12:26:06.879548] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:06:01.427 [2024-12-16 12:26:06.879550] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.366 Running I/O for 1 seconds... 00:06:02.366 lcore 0: 187103 00:06:02.366 lcore 1: 187103 00:06:02.366 lcore 2: 187103 00:06:02.366 lcore 3: 187105 00:06:02.366 done. 00:06:02.367 00:06:02.367 real 0m1.176s 00:06:02.367 user 0m4.088s 00:06:02.367 sys 0m0.086s 00:06:02.367 12:26:07 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:02.367 12:26:07 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:02.367 ************************************ 00:06:02.367 END TEST event_perf 00:06:02.367 ************************************ 00:06:02.627 12:26:07 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:02.627 12:26:07 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:02.627 12:26:07 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:02.627 12:26:07 event -- common/autotest_common.sh@10 -- # set +x 00:06:02.627 ************************************ 00:06:02.627 START TEST event_reactor 00:06:02.627 ************************************ 00:06:02.627 12:26:07 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:02.627 [2024-12-16 12:26:08.003557] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 
00:06:02.627 [2024-12-16 12:26:08.003620] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid974560 ] 00:06:02.627 [2024-12-16 12:26:08.069972] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.627 [2024-12-16 12:26:08.108642] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.010 test_start 00:06:04.010 oneshot 00:06:04.010 tick 100 00:06:04.010 tick 100 00:06:04.010 tick 250 00:06:04.010 tick 100 00:06:04.010 tick 100 00:06:04.010 tick 100 00:06:04.010 tick 250 00:06:04.010 tick 500 00:06:04.010 tick 100 00:06:04.010 tick 100 00:06:04.010 tick 250 00:06:04.010 tick 100 00:06:04.010 tick 100 00:06:04.010 test_end 00:06:04.010 00:06:04.010 real 0m1.148s 00:06:04.010 user 0m1.079s 00:06:04.010 sys 0m0.066s 00:06:04.010 12:26:09 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:04.010 12:26:09 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:04.010 ************************************ 00:06:04.010 END TEST event_reactor 00:06:04.010 ************************************ 00:06:04.010 12:26:09 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:04.010 12:26:09 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:04.010 12:26:09 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:04.010 12:26:09 event -- common/autotest_common.sh@10 -- # set +x 00:06:04.010 ************************************ 00:06:04.010 START TEST event_reactor_perf 00:06:04.010 ************************************ 00:06:04.010 12:26:09 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:04.010 [2024-12-16 12:26:09.235283] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 
00:06:04.010 [2024-12-16 12:26:09.235363] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid974809 ] 00:06:04.010 [2024-12-16 12:26:09.306788] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.010 [2024-12-16 12:26:09.345392] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.950 test_start 00:06:04.950 test_end 00:06:04.950 Performance: 936904 events per second 00:06:04.950 00:06:04.950 real 0m1.166s 00:06:04.950 user 0m1.092s 00:06:04.950 sys 0m0.070s 00:06:04.950 12:26:10 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:04.950 12:26:10 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:04.950 ************************************ 00:06:04.950 END TEST event_reactor_perf 00:06:04.950 ************************************ 00:06:04.950 12:26:10 event -- event/event.sh@49 -- # uname -s 00:06:04.950 12:26:10 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:04.950 12:26:10 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:04.950 12:26:10 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:04.950 12:26:10 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:04.950 12:26:10 event -- common/autotest_common.sh@10 -- # set +x 00:06:04.950 ************************************ 00:06:04.950 START TEST event_scheduler 00:06:04.950 ************************************ 00:06:04.950 12:26:10 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:05.211 * Looking for test storage... 
00:06:05.211 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler 00:06:05.211 12:26:10 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:05.211 12:26:10 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:06:05.211 12:26:10 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:05.211 12:26:10 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:05.211 12:26:10 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:05.211 12:26:10 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:05.211 12:26:10 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:05.211 12:26:10 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:05.211 12:26:10 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:05.211 12:26:10 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:05.211 12:26:10 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:05.211 12:26:10 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:05.211 12:26:10 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:05.211 12:26:10 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:05.211 12:26:10 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:05.211 12:26:10 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:05.211 12:26:10 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:05.211 12:26:10 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:05.211 12:26:10 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:05.211 12:26:10 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:05.211 12:26:10 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:05.211 12:26:10 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:05.211 12:26:10 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:05.211 12:26:10 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:05.211 12:26:10 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:05.211 12:26:10 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:05.211 12:26:10 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:05.211 12:26:10 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:05.211 12:26:10 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:05.211 12:26:10 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:05.211 12:26:10 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:05.211 12:26:10 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:05.211 12:26:10 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:05.211 12:26:10 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:05.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.211 --rc genhtml_branch_coverage=1 00:06:05.211 --rc genhtml_function_coverage=1 00:06:05.211 --rc genhtml_legend=1 00:06:05.211 --rc geninfo_all_blocks=1 00:06:05.211 --rc geninfo_unexecuted_blocks=1 00:06:05.211 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:05.211 ' 00:06:05.211 12:26:10 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:05.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.211 --rc genhtml_branch_coverage=1 00:06:05.211 --rc genhtml_function_coverage=1 00:06:05.211 --rc genhtml_legend=1 00:06:05.211 --rc geninfo_all_blocks=1 00:06:05.211 --rc geninfo_unexecuted_blocks=1 00:06:05.211 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:05.211 ' 00:06:05.211 12:26:10 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:05.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.211 --rc genhtml_branch_coverage=1 00:06:05.211 --rc genhtml_function_coverage=1 00:06:05.211 --rc genhtml_legend=1 00:06:05.211 --rc geninfo_all_blocks=1 00:06:05.211 --rc geninfo_unexecuted_blocks=1 00:06:05.211 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:05.211 ' 00:06:05.211 12:26:10 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:05.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.211 --rc genhtml_branch_coverage=1 00:06:05.211 --rc genhtml_function_coverage=1 00:06:05.211 --rc genhtml_legend=1 00:06:05.211 --rc geninfo_all_blocks=1 00:06:05.211 --rc geninfo_unexecuted_blocks=1 00:06:05.211 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:05.211 ' 00:06:05.211 12:26:10 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:05.211 12:26:10 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=975125 00:06:05.211 12:26:10 event.event_scheduler -- 
scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:05.211 12:26:10 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 975125 00:06:05.211 12:26:10 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:05.211 12:26:10 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 975125 ']' 00:06:05.211 12:26:10 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.211 12:26:10 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:05.211 12:26:10 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:05.211 12:26:10 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:05.211 12:26:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:05.211 [2024-12-16 12:26:10.669520] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:06:05.211 [2024-12-16 12:26:10.669575] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid975125 ] 00:06:05.211 [2024-12-16 12:26:10.734090] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:05.472 [2024-12-16 12:26:10.781758] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.472 [2024-12-16 12:26:10.781842] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:05.472 [2024-12-16 12:26:10.781930] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:06:05.472 [2024-12-16 12:26:10.781932] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:05.472 12:26:10 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:05.472 12:26:10 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:06:05.472 12:26:10 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:05.472 12:26:10 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:05.472 12:26:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:05.472 [2024-12-16 12:26:10.850584] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:06:05.472 [2024-12-16 12:26:10.850605] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:05.472 [2024-12-16 12:26:10.850620] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:05.472 [2024-12-16 12:26:10.850628] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:05.472 [2024-12-16 12:26:10.850636] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:05.472 12:26:10 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:05.472 12:26:10 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:05.472 12:26:10 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:05.472 
12:26:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:05.472 [2024-12-16 12:26:10.924854] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:05.472 12:26:10 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:05.472 12:26:10 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:05.472 12:26:10 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:05.472 12:26:10 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:05.472 12:26:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:05.472 ************************************ 00:06:05.472 START TEST scheduler_create_thread 00:06:05.472 ************************************ 00:06:05.472 12:26:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:06:05.472 12:26:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:05.472 12:26:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:05.472 12:26:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:05.472 2 00:06:05.472 12:26:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:05.472 12:26:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:05.472 12:26:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:05.472 12:26:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:05.472 3 00:06:05.472 12:26:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:05.472 12:26:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:05.472 12:26:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:05.472 12:26:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:05.472 4 00:06:05.472 12:26:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:05.472 12:26:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:05.472 12:26:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:05.473 12:26:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:05.473 5 00:06:05.473 12:26:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:05.473 12:26:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:05.473 12:26:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:05.473 12:26:11 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:05.473 6 00:06:05.473 12:26:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:05.473 12:26:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:05.473 12:26:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:05.473 12:26:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:05.473 7 00:06:05.473 12:26:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:05.473 12:26:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:05.473 12:26:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:05.473 12:26:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:05.473 8 00:06:05.473 12:26:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:05.473 12:26:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:05.473 12:26:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:05.473 12:26:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:05.738 9 00:06:05.738 12:26:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:05.738 12:26:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:05.738 12:26:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:05.738 12:26:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:05.738 10 00:06:05.738 12:26:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:05.738 12:26:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:05.738 12:26:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:05.738 12:26:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:05.738 12:26:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:05.738 12:26:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:05.738 12:26:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:05.739 12:26:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:05.739 12:26:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:05.739 12:26:11 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:05.739 12:26:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:05.739 12:26:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:05.739 12:26:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:07.122 12:26:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:07.122 12:26:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:07.122 12:26:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:07.122 12:26:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:07.122 12:26:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.061 12:26:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:08.061 00:06:08.061 real 0m2.619s 00:06:08.061 user 0m0.024s 00:06:08.061 sys 0m0.007s 00:06:08.061 12:26:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:08.061 12:26:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.061 ************************************ 00:06:08.061 END TEST scheduler_create_thread 00:06:08.061 ************************************ 00:06:08.321 12:26:13 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:08.321 12:26:13 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 975125 00:06:08.321 12:26:13 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 975125 ']' 00:06:08.321 12:26:13 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 975125 00:06:08.321 12:26:13 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:06:08.321 12:26:13 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:08.321 12:26:13 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 975125 00:06:08.321 12:26:13 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:08.321 12:26:13 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:08.321 12:26:13 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 975125' 00:06:08.321 killing process with pid 975125 00:06:08.321 12:26:13 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 975125 00:06:08.321 12:26:13 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 975125 00:06:08.581 [2024-12-16 12:26:14.066946] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
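For reference, the scheduler_create_thread run traced above reduces to the RPC sequence below. This is a minimal sketch condensed from the trace: the binary path, core masks, activity percentages and RPC method names are copied verbatim from the log, while rpc.py stands in for spdk/scripts/rpc.py on the default /var/tmp/spdk.sock socket (the script itself goes through its rpc_cmd helper), and the loop and background/pid handling are illustrative.

# Start the scheduler test app on cores 0-3, main lcore 2, holding init until RPCs arrive
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
scheduler_pid=$!

# Select the dynamic scheduler, then let the framework finish initializing
rpc.py framework_set_scheduler dynamic
rpc.py framework_start_init

# One active (100%) and one idle (0%) thread pinned to each of the four cores
for mask in 0x1 0x2 0x4 0x8; do
    rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m $mask -a 100
    rpc.py --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m $mask -a 0
done

# Unpinned threads: one at 30% activity, one bumped from 0% to 50%, one created and then deleted
rpc.py --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
thread_id=$(rpc.py --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
rpc.py --plugin scheduler_plugin scheduler_thread_set_active $thread_id 50
thread_id=$(rpc.py --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
rpc.py --plugin scheduler_plugin scheduler_thread_delete $thread_id

# Tear down the test application
kill $scheduler_pid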
00:06:08.840 00:06:08.840 real 0m3.762s 00:06:08.840 user 0m5.661s 00:06:08.840 sys 0m0.425s 00:06:08.840 12:26:14 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:08.840 12:26:14 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:08.840 ************************************ 00:06:08.840 END TEST event_scheduler 00:06:08.840 ************************************ 00:06:08.840 12:26:14 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:08.840 12:26:14 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:08.840 12:26:14 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:08.840 12:26:14 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:08.840 12:26:14 event -- common/autotest_common.sh@10 -- # set +x 00:06:08.840 ************************************ 00:06:08.840 START TEST app_repeat 00:06:08.840 ************************************ 00:06:08.840 12:26:14 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:06:08.840 12:26:14 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.841 12:26:14 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.841 12:26:14 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:08.841 12:26:14 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:08.841 12:26:14 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:08.841 12:26:14 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:08.841 12:26:14 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:08.841 12:26:14 event.app_repeat -- event/event.sh@19 -- # repeat_pid=975842 00:06:08.841 12:26:14 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:08.841 12:26:14 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:08.841 12:26:14 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 975842' 00:06:08.841 Process app_repeat pid: 975842 00:06:08.841 12:26:14 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:08.841 12:26:14 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:08.841 spdk_app_start Round 0 00:06:08.841 12:26:14 event.app_repeat -- event/event.sh@25 -- # waitforlisten 975842 /var/tmp/spdk-nbd.sock 00:06:08.841 12:26:14 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 975842 ']' 00:06:08.841 12:26:14 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:08.841 12:26:14 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:08.841 12:26:14 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:08.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:08.841 12:26:14 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:08.841 12:26:14 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:08.841 [2024-12-16 12:26:14.351332] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 
00:06:08.841 [2024-12-16 12:26:14.351406] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid975842 ] 00:06:09.100 [2024-12-16 12:26:14.425288] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:09.100 [2024-12-16 12:26:14.469195] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:09.100 [2024-12-16 12:26:14.469198] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.100 12:26:14 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:09.100 12:26:14 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:09.100 12:26:14 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:09.361 Malloc0 00:06:09.361 12:26:14 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:09.621 Malloc1 00:06:09.621 12:26:14 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:09.621 12:26:14 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.621 12:26:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:09.621 12:26:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:09.621 12:26:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.621 12:26:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:09.621 12:26:14 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:09.621 12:26:14 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.621 12:26:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:09.621 12:26:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:09.621 12:26:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.621 12:26:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:09.621 12:26:14 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:09.621 12:26:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:09.621 12:26:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:09.621 12:26:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:09.621 /dev/nbd0 00:06:09.621 12:26:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:09.621 12:26:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:09.621 12:26:15 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:09.622 12:26:15 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:09.622 12:26:15 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:09.622 12:26:15 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:09.622 12:26:15 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 
/proc/partitions 00:06:09.881 12:26:15 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:09.881 12:26:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:09.881 12:26:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:09.881 12:26:15 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:09.881 1+0 records in 00:06:09.881 1+0 records out 00:06:09.881 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000121096 s, 33.8 MB/s 00:06:09.881 12:26:15 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:09.881 12:26:15 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:09.881 12:26:15 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:09.881 12:26:15 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:09.881 12:26:15 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:09.881 12:26:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:09.881 12:26:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:09.881 12:26:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:09.881 /dev/nbd1 00:06:09.881 12:26:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:09.881 12:26:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:09.881 12:26:15 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:09.881 12:26:15 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:09.881 12:26:15 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:09.881 12:26:15 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:09.881 12:26:15 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:09.881 12:26:15 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:09.881 12:26:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:09.881 12:26:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:09.881 12:26:15 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:09.881 1+0 records in 00:06:09.881 1+0 records out 00:06:09.881 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00018821 s, 21.8 MB/s 00:06:09.881 12:26:15 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:09.881 12:26:15 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:09.881 12:26:15 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:09.881 12:26:15 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:09.881 12:26:15 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:09.881 12:26:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:09.881 12:26:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 
00:06:09.881 12:26:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:09.881 12:26:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.881 12:26:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:10.141 12:26:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:10.141 { 00:06:10.141 "nbd_device": "/dev/nbd0", 00:06:10.141 "bdev_name": "Malloc0" 00:06:10.141 }, 00:06:10.141 { 00:06:10.141 "nbd_device": "/dev/nbd1", 00:06:10.141 "bdev_name": "Malloc1" 00:06:10.141 } 00:06:10.141 ]' 00:06:10.141 12:26:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:10.141 { 00:06:10.141 "nbd_device": "/dev/nbd0", 00:06:10.141 "bdev_name": "Malloc0" 00:06:10.141 }, 00:06:10.141 { 00:06:10.141 "nbd_device": "/dev/nbd1", 00:06:10.141 "bdev_name": "Malloc1" 00:06:10.141 } 00:06:10.141 ]' 00:06:10.141 12:26:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:10.141 12:26:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:10.141 /dev/nbd1' 00:06:10.141 12:26:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:10.141 /dev/nbd1' 00:06:10.141 12:26:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:10.141 12:26:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:10.141 12:26:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:10.141 12:26:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:10.141 12:26:15 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:10.141 12:26:15 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:10.141 12:26:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.141 12:26:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:10.141 12:26:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:10.141 12:26:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:10.141 12:26:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:10.141 12:26:15 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:10.141 256+0 records in 00:06:10.141 256+0 records out 00:06:10.141 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0116397 s, 90.1 MB/s 00:06:10.141 12:26:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:10.141 12:26:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:10.141 256+0 records in 00:06:10.141 256+0 records out 00:06:10.141 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0199679 s, 52.5 MB/s 00:06:10.141 12:26:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:10.141 12:26:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:10.401 256+0 records in 00:06:10.401 256+0 records out 00:06:10.401 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0211407 s, 49.6 
MB/s 00:06:10.401 12:26:15 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:10.401 12:26:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.401 12:26:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:10.401 12:26:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:10.401 12:26:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:10.402 12:26:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:10.402 12:26:15 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:10.402 12:26:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:10.402 12:26:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:10.402 12:26:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:10.402 12:26:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:10.402 12:26:15 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:10.402 12:26:15 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:10.402 12:26:15 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.402 12:26:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.402 12:26:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:10.402 12:26:15 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:10.402 12:26:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:10.402 12:26:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:10.402 12:26:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:10.402 12:26:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:10.402 12:26:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:10.402 12:26:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:10.402 12:26:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:10.402 12:26:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:10.402 12:26:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:10.402 12:26:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:10.402 12:26:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:10.402 12:26:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:10.707 12:26:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:10.707 12:26:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:10.707 12:26:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:10.707 12:26:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:10.707 12:26:16 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:10.707 12:26:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:10.707 12:26:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:10.707 12:26:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:10.707 12:26:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:10.707 12:26:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.707 12:26:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:11.099 12:26:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:11.099 12:26:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:11.099 12:26:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:11.099 12:26:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:11.099 12:26:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:11.099 12:26:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:11.099 12:26:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:11.099 12:26:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:11.099 12:26:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:11.099 12:26:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:11.099 12:26:16 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:11.099 12:26:16 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:11.099 12:26:16 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:11.099 12:26:16 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:11.359 [2024-12-16 12:26:16.755084] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:11.359 [2024-12-16 12:26:16.791643] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:11.359 [2024-12-16 12:26:16.791644] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.359 [2024-12-16 12:26:16.832864] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:11.359 [2024-12-16 12:26:16.832907] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:14.647 12:26:19 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:14.647 12:26:19 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:14.647 spdk_app_start Round 1 00:06:14.647 12:26:19 event.app_repeat -- event/event.sh@25 -- # waitforlisten 975842 /var/tmp/spdk-nbd.sock 00:06:14.647 12:26:19 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 975842 ']' 00:06:14.647 12:26:19 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:14.647 12:26:19 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:14.647 12:26:19 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:14.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
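Round 0 above (and every later round) exercises the same NBD data-verify pattern. Condensed into a standalone sketch, with the RPC names, malloc bdev arguments, device nodes, 1 MiB transfer size and dd/cmp flags taken from the trace; rpc.py again stands in for spdk/scripts/rpc.py with -s pointing at the app_repeat socket, and the shortened file name nbdrandtest is illustrative.

# Create two 64 MB malloc bdevs with a 4096-byte block size and export them as NBD devices
rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096        # -> Malloc0
rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096        # -> Malloc1
rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1

# Write 1 MiB of random data through each device, then read it back and compare
dd if=/dev/urandom of=nbdrandtest bs=4096 count=256
for nbd in /dev/nbd0 /dev/nbd1; do
    dd if=nbdrandtest of=$nbd bs=4096 count=256 oflag=direct
    cmp -b -n 1M nbdrandtest $nbd
done
rm nbdrandtest

# Detach both devices and confirm nothing is left exported
rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks        # expect an empty list: []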
00:06:14.647 12:26:19 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:14.647 12:26:19 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:14.647 12:26:19 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:14.647 12:26:19 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:14.647 12:26:19 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:14.647 Malloc0 00:06:14.647 12:26:19 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:14.647 Malloc1 00:06:14.647 12:26:20 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:14.647 12:26:20 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.647 12:26:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:14.647 12:26:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:14.647 12:26:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.647 12:26:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:14.647 12:26:20 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:14.647 12:26:20 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.647 12:26:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:14.647 12:26:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:14.647 12:26:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.647 12:26:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:14.647 12:26:20 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:14.647 12:26:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:14.647 12:26:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:14.647 12:26:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:14.906 /dev/nbd0 00:06:14.906 12:26:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:14.906 12:26:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:14.906 12:26:20 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:14.906 12:26:20 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:14.906 12:26:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:14.906 12:26:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:14.906 12:26:20 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:14.906 12:26:20 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:14.906 12:26:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:14.906 12:26:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:14.906 12:26:20 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:14.906 1+0 records in 00:06:14.906 1+0 records out 00:06:14.906 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000215969 s, 19.0 MB/s 00:06:14.906 12:26:20 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:14.906 12:26:20 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:14.906 12:26:20 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:14.906 12:26:20 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:14.906 12:26:20 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:14.906 12:26:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:14.906 12:26:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:14.906 12:26:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:15.166 /dev/nbd1 00:06:15.166 12:26:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:15.166 12:26:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:15.166 12:26:20 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:15.166 12:26:20 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:15.166 12:26:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:15.166 12:26:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:15.166 12:26:20 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:15.166 12:26:20 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:15.166 12:26:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:15.166 12:26:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:15.166 12:26:20 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:15.166 1+0 records in 00:06:15.166 1+0 records out 00:06:15.166 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000231157 s, 17.7 MB/s 00:06:15.166 12:26:20 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:15.166 12:26:20 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:15.166 12:26:20 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:15.166 12:26:20 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:15.166 12:26:20 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:15.166 12:26:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:15.166 12:26:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:15.166 12:26:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:15.166 12:26:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.166 12:26:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_get_disks 00:06:15.425 12:26:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:15.425 { 00:06:15.425 "nbd_device": "/dev/nbd0", 00:06:15.425 "bdev_name": "Malloc0" 00:06:15.425 }, 00:06:15.425 { 00:06:15.425 "nbd_device": "/dev/nbd1", 00:06:15.425 "bdev_name": "Malloc1" 00:06:15.425 } 00:06:15.425 ]' 00:06:15.425 12:26:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:15.425 { 00:06:15.425 "nbd_device": "/dev/nbd0", 00:06:15.425 "bdev_name": "Malloc0" 00:06:15.425 }, 00:06:15.425 { 00:06:15.425 "nbd_device": "/dev/nbd1", 00:06:15.425 "bdev_name": "Malloc1" 00:06:15.425 } 00:06:15.425 ]' 00:06:15.425 12:26:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:15.425 12:26:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:15.425 /dev/nbd1' 00:06:15.425 12:26:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:15.425 /dev/nbd1' 00:06:15.425 12:26:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:15.425 12:26:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:15.425 12:26:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:15.425 12:26:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:15.425 12:26:20 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:15.425 12:26:20 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:15.425 12:26:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.425 12:26:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:15.425 12:26:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:15.425 12:26:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:15.425 12:26:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:15.425 12:26:20 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:15.425 256+0 records in 00:06:15.425 256+0 records out 00:06:15.425 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0111519 s, 94.0 MB/s 00:06:15.425 12:26:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:15.425 12:26:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:15.425 256+0 records in 00:06:15.425 256+0 records out 00:06:15.425 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0201871 s, 51.9 MB/s 00:06:15.425 12:26:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:15.425 12:26:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:15.425 256+0 records in 00:06:15.426 256+0 records out 00:06:15.426 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0214081 s, 49.0 MB/s 00:06:15.426 12:26:20 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:15.426 12:26:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.426 12:26:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:15.426 12:26:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # 
local operation=verify 00:06:15.426 12:26:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:15.426 12:26:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:15.426 12:26:20 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:15.426 12:26:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:15.426 12:26:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:15.685 12:26:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:15.685 12:26:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:15.685 12:26:20 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:15.685 12:26:20 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:15.685 12:26:21 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.685 12:26:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.685 12:26:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:15.685 12:26:21 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:15.685 12:26:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:15.685 12:26:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:15.685 12:26:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:15.685 12:26:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:15.685 12:26:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:15.685 12:26:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:15.685 12:26:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:15.685 12:26:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:15.685 12:26:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:15.685 12:26:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:15.685 12:26:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:15.685 12:26:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:15.944 12:26:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:15.944 12:26:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:15.944 12:26:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:15.944 12:26:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:15.944 12:26:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:15.944 12:26:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:15.944 12:26:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:15.944 12:26:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:15.944 12:26:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # 
nbd_get_count /var/tmp/spdk-nbd.sock 00:06:15.944 12:26:21 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.944 12:26:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:16.203 12:26:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:16.203 12:26:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:16.203 12:26:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:16.203 12:26:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:16.203 12:26:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:16.203 12:26:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:16.203 12:26:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:16.203 12:26:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:16.203 12:26:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:16.203 12:26:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:16.203 12:26:21 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:16.203 12:26:21 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:16.203 12:26:21 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:16.462 12:26:21 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:16.721 [2024-12-16 12:26:22.033809] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:16.721 [2024-12-16 12:26:22.072565] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:16.721 [2024-12-16 12:26:22.072567] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.721 [2024-12-16 12:26:22.114293] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:16.721 [2024-12-16 12:26:22.114334] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:20.008 12:26:24 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:20.008 12:26:24 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:20.008 spdk_app_start Round 2 00:06:20.008 12:26:24 event.app_repeat -- event/event.sh@25 -- # waitforlisten 975842 /var/tmp/spdk-nbd.sock 00:06:20.008 12:26:24 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 975842 ']' 00:06:20.008 12:26:24 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:20.008 12:26:24 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:20.008 12:26:24 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:20.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
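The three rounds are driven by one outer loop in event.sh: each pass creates the malloc/NBD pair, runs the data verify sketched earlier, then asks the running instance to exit via spdk_kill_instance SIGTERM so that app_repeat can bring it back up for the next round, as the reactor restart messages between rounds show. A rough sketch of that loop, with the binary path, arguments and helper name nbd_rpc_data_verify taken from the trace and everything else illustrative.

# app_repeat is started once on its own RPC socket (arguments exactly as logged)
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/app_repeat/app_repeat \
    -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 &
repeat_pid=$!

for i in {0..2}; do
    echo "spdk_app_start Round $i"
    waitforlisten $repeat_pid /var/tmp/spdk-nbd.sock
    rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096     # Malloc0
    rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096     # Malloc1
    nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
    rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
    sleep 3
done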
00:06:20.008 12:26:24 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:20.008 12:26:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:20.008 12:26:25 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:20.008 12:26:25 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:20.008 12:26:25 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:20.008 Malloc0 00:06:20.008 12:26:25 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:20.009 Malloc1 00:06:20.009 12:26:25 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:20.009 12:26:25 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.009 12:26:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:20.009 12:26:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:20.009 12:26:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:20.009 12:26:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:20.009 12:26:25 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:20.009 12:26:25 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.009 12:26:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:20.009 12:26:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:20.009 12:26:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:20.009 12:26:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:20.009 12:26:25 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:20.009 12:26:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:20.009 12:26:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:20.009 12:26:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:20.268 /dev/nbd0 00:06:20.268 12:26:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:20.268 12:26:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:20.268 12:26:25 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:20.268 12:26:25 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:20.268 12:26:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:20.268 12:26:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:20.268 12:26:25 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:20.268 12:26:25 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:20.268 12:26:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:20.268 12:26:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:20.268 12:26:25 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:20.268 1+0 records in 00:06:20.268 1+0 records out 00:06:20.268 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000181548 s, 22.6 MB/s 00:06:20.268 12:26:25 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:20.268 12:26:25 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:20.268 12:26:25 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:20.268 12:26:25 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:20.268 12:26:25 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:20.268 12:26:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:20.268 12:26:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:20.268 12:26:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:20.527 /dev/nbd1 00:06:20.527 12:26:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:20.527 12:26:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:20.527 12:26:25 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:20.527 12:26:25 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:20.527 12:26:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:20.527 12:26:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:20.527 12:26:25 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:20.527 12:26:25 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:20.527 12:26:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:20.527 12:26:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:20.527 12:26:25 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:20.527 1+0 records in 00:06:20.527 1+0 records out 00:06:20.527 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000254494 s, 16.1 MB/s 00:06:20.527 12:26:25 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:20.527 12:26:25 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:20.527 12:26:25 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:20.527 12:26:25 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:20.527 12:26:25 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:20.527 12:26:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:20.527 12:26:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:20.527 12:26:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:20.527 12:26:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.527 12:26:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_get_disks 00:06:20.786 12:26:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:20.786 { 00:06:20.786 "nbd_device": "/dev/nbd0", 00:06:20.786 "bdev_name": "Malloc0" 00:06:20.786 }, 00:06:20.786 { 00:06:20.786 "nbd_device": "/dev/nbd1", 00:06:20.786 "bdev_name": "Malloc1" 00:06:20.786 } 00:06:20.787 ]' 00:06:20.787 12:26:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:20.787 { 00:06:20.787 "nbd_device": "/dev/nbd0", 00:06:20.787 "bdev_name": "Malloc0" 00:06:20.787 }, 00:06:20.787 { 00:06:20.787 "nbd_device": "/dev/nbd1", 00:06:20.787 "bdev_name": "Malloc1" 00:06:20.787 } 00:06:20.787 ]' 00:06:20.787 12:26:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:20.787 12:26:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:20.787 /dev/nbd1' 00:06:20.787 12:26:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:20.787 /dev/nbd1' 00:06:20.787 12:26:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:20.787 12:26:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:20.787 12:26:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:20.787 12:26:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:20.787 12:26:26 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:20.787 12:26:26 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:20.787 12:26:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:20.787 12:26:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:20.787 12:26:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:20.787 12:26:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:20.787 12:26:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:20.787 12:26:26 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:20.787 256+0 records in 00:06:20.787 256+0 records out 00:06:20.787 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106453 s, 98.5 MB/s 00:06:20.787 12:26:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:20.787 12:26:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:20.787 256+0 records in 00:06:20.787 256+0 records out 00:06:20.787 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0200737 s, 52.2 MB/s 00:06:20.787 12:26:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:20.787 12:26:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:20.787 256+0 records in 00:06:20.787 256+0 records out 00:06:20.787 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0214566 s, 48.9 MB/s 00:06:20.787 12:26:26 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:20.787 12:26:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:20.787 12:26:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:20.787 12:26:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # 
local operation=verify 00:06:20.787 12:26:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:20.787 12:26:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:20.787 12:26:26 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:20.787 12:26:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:20.787 12:26:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:20.787 12:26:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:20.787 12:26:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:20.787 12:26:26 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:20.787 12:26:26 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:20.787 12:26:26 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.787 12:26:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:20.787 12:26:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:20.787 12:26:26 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:20.787 12:26:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:20.787 12:26:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:21.046 12:26:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:21.046 12:26:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:21.046 12:26:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:21.046 12:26:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:21.046 12:26:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:21.046 12:26:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:21.046 12:26:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:21.046 12:26:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:21.046 12:26:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:21.046 12:26:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:21.305 12:26:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:21.305 12:26:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:21.305 12:26:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:21.305 12:26:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:21.305 12:26:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:21.305 12:26:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:21.305 12:26:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:21.305 12:26:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:21.305 12:26:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # 
nbd_get_count /var/tmp/spdk-nbd.sock 00:06:21.305 12:26:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.305 12:26:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:21.565 12:26:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:21.565 12:26:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:21.565 12:26:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:21.565 12:26:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:21.565 12:26:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:21.565 12:26:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:21.565 12:26:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:21.565 12:26:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:21.565 12:26:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:21.565 12:26:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:21.565 12:26:26 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:21.565 12:26:26 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:21.565 12:26:26 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:21.824 12:26:27 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:21.824 [2024-12-16 12:26:27.307485] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:21.824 [2024-12-16 12:26:27.344021] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:21.824 [2024-12-16 12:26:27.344024] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.824 [2024-12-16 12:26:27.385549] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:21.824 [2024-12-16 12:26:27.385590] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:25.114 12:26:30 event.app_repeat -- event/event.sh@38 -- # waitforlisten 975842 /var/tmp/spdk-nbd.sock 00:06:25.114 12:26:30 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 975842 ']' 00:06:25.114 12:26:30 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:25.114 12:26:30 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:25.114 12:26:30 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:25.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:25.114 12:26:30 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:25.114 12:26:30 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:25.114 12:26:30 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:25.114 12:26:30 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:25.114 12:26:30 event.app_repeat -- event/event.sh@39 -- # killprocess 975842 00:06:25.114 12:26:30 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 975842 ']' 00:06:25.114 12:26:30 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 975842 00:06:25.114 12:26:30 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:06:25.114 12:26:30 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:25.114 12:26:30 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 975842 00:06:25.114 12:26:30 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:25.114 12:26:30 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:25.114 12:26:30 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 975842' 00:06:25.114 killing process with pid 975842 00:06:25.114 12:26:30 event.app_repeat -- common/autotest_common.sh@973 -- # kill 975842 00:06:25.114 12:26:30 event.app_repeat -- common/autotest_common.sh@978 -- # wait 975842 00:06:25.115 spdk_app_start is called in Round 0. 00:06:25.115 Shutdown signal received, stop current app iteration 00:06:25.115 Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 reinitialization... 00:06:25.115 spdk_app_start is called in Round 1. 00:06:25.115 Shutdown signal received, stop current app iteration 00:06:25.115 Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 reinitialization... 00:06:25.115 spdk_app_start is called in Round 2. 00:06:25.115 Shutdown signal received, stop current app iteration 00:06:25.115 Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 reinitialization... 00:06:25.115 spdk_app_start is called in Round 3. 
00:06:25.115 Shutdown signal received, stop current app iteration 00:06:25.115 12:26:30 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:25.115 12:26:30 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:25.115 00:06:25.115 real 0m16.220s 00:06:25.115 user 0m34.964s 00:06:25.115 sys 0m3.086s 00:06:25.115 12:26:30 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:25.115 12:26:30 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:25.115 ************************************ 00:06:25.115 END TEST app_repeat 00:06:25.115 ************************************ 00:06:25.115 12:26:30 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:25.115 12:26:30 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:25.115 12:26:30 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:25.115 12:26:30 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:25.115 12:26:30 event -- common/autotest_common.sh@10 -- # set +x 00:06:25.115 ************************************ 00:06:25.115 START TEST cpu_locks 00:06:25.115 ************************************ 00:06:25.115 12:26:30 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:25.376 * Looking for test storage... 00:06:25.376 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:06:25.377 12:26:30 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:25.377 12:26:30 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:06:25.377 12:26:30 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:25.377 12:26:30 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:25.377 12:26:30 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:25.377 12:26:30 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:25.377 12:26:30 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:25.377 12:26:30 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:25.377 12:26:30 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:25.377 12:26:30 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:25.377 12:26:30 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:25.377 12:26:30 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:25.377 12:26:30 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:25.377 12:26:30 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:25.377 12:26:30 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:25.377 12:26:30 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:25.377 12:26:30 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:25.377 12:26:30 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:25.377 12:26:30 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:25.377 12:26:30 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:25.377 12:26:30 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:25.377 12:26:30 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:25.377 12:26:30 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:25.377 12:26:30 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:25.377 12:26:30 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:25.377 12:26:30 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:25.377 12:26:30 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:25.377 12:26:30 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:25.377 12:26:30 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:25.377 12:26:30 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:25.377 12:26:30 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:25.377 12:26:30 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:25.377 12:26:30 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:25.377 12:26:30 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:25.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.377 --rc genhtml_branch_coverage=1 00:06:25.377 --rc genhtml_function_coverage=1 00:06:25.377 --rc genhtml_legend=1 00:06:25.377 --rc geninfo_all_blocks=1 00:06:25.377 --rc geninfo_unexecuted_blocks=1 00:06:25.377 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:25.377 ' 00:06:25.377 12:26:30 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:25.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.377 --rc genhtml_branch_coverage=1 00:06:25.377 --rc genhtml_function_coverage=1 00:06:25.377 --rc genhtml_legend=1 00:06:25.377 --rc geninfo_all_blocks=1 00:06:25.377 --rc geninfo_unexecuted_blocks=1 00:06:25.377 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:25.377 ' 00:06:25.377 12:26:30 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:25.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.377 --rc genhtml_branch_coverage=1 00:06:25.377 --rc genhtml_function_coverage=1 00:06:25.377 --rc genhtml_legend=1 00:06:25.377 --rc geninfo_all_blocks=1 00:06:25.377 --rc geninfo_unexecuted_blocks=1 00:06:25.377 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:25.377 ' 00:06:25.377 12:26:30 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:25.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.377 --rc genhtml_branch_coverage=1 00:06:25.377 --rc genhtml_function_coverage=1 00:06:25.377 --rc genhtml_legend=1 00:06:25.377 --rc geninfo_all_blocks=1 00:06:25.377 --rc geninfo_unexecuted_blocks=1 00:06:25.377 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:25.377 ' 00:06:25.377 12:26:30 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:25.377 12:26:30 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:25.377 12:26:30 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:25.377 12:26:30 event.cpu_locks -- 
event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:25.377 12:26:30 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:25.377 12:26:30 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:25.377 12:26:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:25.377 ************************************ 00:06:25.377 START TEST default_locks 00:06:25.377 ************************************ 00:06:25.377 12:26:30 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:06:25.377 12:26:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=978887 00:06:25.377 12:26:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 978887 00:06:25.377 12:26:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:25.377 12:26:30 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 978887 ']' 00:06:25.377 12:26:30 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.377 12:26:30 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:25.377 12:26:30 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:25.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:25.377 12:26:30 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:25.377 12:26:30 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:25.377 [2024-12-16 12:26:30.882813] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 
00:06:25.377 [2024-12-16 12:26:30.882869] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid978887 ] 00:06:25.637 [2024-12-16 12:26:30.954940] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.637 [2024-12-16 12:26:30.994571] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.896 12:26:31 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:25.896 12:26:31 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:06:25.896 12:26:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 978887 00:06:25.896 12:26:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 978887 00:06:25.896 12:26:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:26.465 lslocks: write error 00:06:26.465 12:26:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 978887 00:06:26.465 12:26:31 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 978887 ']' 00:06:26.465 12:26:31 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 978887 00:06:26.465 12:26:31 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:06:26.465 12:26:31 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:26.465 12:26:31 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 978887 00:06:26.465 12:26:31 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:26.465 12:26:31 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:26.465 12:26:31 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 978887' 00:06:26.465 killing process with pid 978887 00:06:26.465 12:26:31 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 978887 00:06:26.465 12:26:31 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 978887 00:06:26.724 12:26:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 978887 00:06:26.724 12:26:32 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:06:26.725 12:26:32 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 978887 00:06:26.725 12:26:32 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:26.725 12:26:32 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:26.725 12:26:32 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:26.725 12:26:32 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:26.725 12:26:32 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 978887 00:06:26.725 12:26:32 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 978887 ']' 00:06:26.725 12:26:32 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.725 12:26:32 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 
00:06:26.725 12:26:32 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.725 12:26:32 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:26.725 12:26:32 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:26.725 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (978887) - No such process 00:06:26.725 ERROR: process (pid: 978887) is no longer running 00:06:26.725 12:26:32 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:26.725 12:26:32 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:06:26.725 12:26:32 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:06:26.725 12:26:32 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:26.725 12:26:32 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:26.725 12:26:32 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:26.725 12:26:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:26.725 12:26:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:26.725 12:26:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:26.725 12:26:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:26.725 00:06:26.725 real 0m1.360s 00:06:26.725 user 0m1.331s 00:06:26.725 sys 0m0.647s 00:06:26.725 12:26:32 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:26.725 12:26:32 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:26.725 ************************************ 00:06:26.725 END TEST default_locks 00:06:26.725 ************************************ 00:06:26.725 12:26:32 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:26.725 12:26:32 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:26.725 12:26:32 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:26.725 12:26:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:26.984 ************************************ 00:06:26.984 START TEST default_locks_via_rpc 00:06:26.984 ************************************ 00:06:26.984 12:26:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:06:26.984 12:26:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:26.984 12:26:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=979179 00:06:26.984 12:26:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 979179 00:06:26.984 12:26:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 979179 ']' 00:06:26.984 12:26:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.984 12:26:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:26.984 12:26:32 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.984 12:26:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:26.984 12:26:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:26.984 [2024-12-16 12:26:32.314934] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:06:26.984 [2024-12-16 12:26:32.314994] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid979179 ] 00:06:26.984 [2024-12-16 12:26:32.383139] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.984 [2024-12-16 12:26:32.419971] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.244 12:26:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:27.244 12:26:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:27.244 12:26:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:27.244 12:26:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:27.244 12:26:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.244 12:26:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:27.244 12:26:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:27.244 12:26:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:27.244 12:26:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:27.244 12:26:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:27.244 12:26:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:27.244 12:26:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:27.244 12:26:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.244 12:26:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:27.244 12:26:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 979179 00:06:27.244 12:26:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 979179 00:06:27.244 12:26:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:27.813 12:26:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 979179 00:06:27.813 12:26:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 979179 ']' 00:06:27.813 12:26:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 979179 00:06:27.813 12:26:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:06:27.813 12:26:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:06:27.813 12:26:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 979179 00:06:27.813 12:26:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:27.813 12:26:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:27.813 12:26:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 979179' 00:06:27.813 killing process with pid 979179 00:06:27.813 12:26:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 979179 00:06:27.813 12:26:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 979179 00:06:28.072 00:06:28.072 real 0m1.278s 00:06:28.072 user 0m1.247s 00:06:28.072 sys 0m0.607s 00:06:28.072 12:26:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:28.072 12:26:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:28.072 ************************************ 00:06:28.072 END TEST default_locks_via_rpc 00:06:28.072 ************************************ 00:06:28.072 12:26:33 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:28.072 12:26:33 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:28.072 12:26:33 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:28.072 12:26:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:28.332 ************************************ 00:06:28.332 START TEST non_locking_app_on_locked_coremask 00:06:28.332 ************************************ 00:06:28.332 12:26:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:06:28.332 12:26:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=979474 00:06:28.332 12:26:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 979474 /var/tmp/spdk.sock 00:06:28.332 12:26:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:28.332 12:26:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 979474 ']' 00:06:28.332 12:26:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.332 12:26:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:28.332 12:26:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:28.332 12:26:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:28.332 12:26:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:28.332 [2024-12-16 12:26:33.687767] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 
00:06:28.332 [2024-12-16 12:26:33.687830] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid979474 ] 00:06:28.332 [2024-12-16 12:26:33.757895] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.332 [2024-12-16 12:26:33.801309] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.591 12:26:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:28.591 12:26:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:28.591 12:26:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=979482 00:06:28.591 12:26:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 979482 /var/tmp/spdk2.sock 00:06:28.591 12:26:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:28.591 12:26:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 979482 ']' 00:06:28.591 12:26:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:28.591 12:26:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:28.591 12:26:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:28.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:28.592 12:26:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:28.592 12:26:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:28.592 [2024-12-16 12:26:34.040139] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:06:28.592 [2024-12-16 12:26:34.040202] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid979482 ] 00:06:28.592 [2024-12-16 12:26:34.137645] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:28.592 [2024-12-16 12:26:34.137668] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.851 [2024-12-16 12:26:34.218478] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.419 12:26:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:29.419 12:26:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:29.419 12:26:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 979474 00:06:29.419 12:26:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 979474 00:06:29.419 12:26:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:30.795 lslocks: write error 00:06:30.795 12:26:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 979474 00:06:30.795 12:26:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 979474 ']' 00:06:30.795 12:26:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 979474 00:06:30.795 12:26:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:30.795 12:26:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:30.795 12:26:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 979474 00:06:30.795 12:26:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:30.795 12:26:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:30.795 12:26:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 979474' 00:06:30.795 killing process with pid 979474 00:06:30.795 12:26:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 979474 00:06:30.795 12:26:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 979474 00:06:31.364 12:26:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 979482 00:06:31.364 12:26:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 979482 ']' 00:06:31.364 12:26:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 979482 00:06:31.364 12:26:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:31.364 12:26:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:31.364 12:26:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 979482 00:06:31.364 12:26:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:31.364 12:26:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:31.364 12:26:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 979482' 00:06:31.364 killing 
process with pid 979482 00:06:31.364 12:26:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 979482 00:06:31.364 12:26:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 979482 00:06:31.624 00:06:31.624 real 0m3.355s 00:06:31.624 user 0m3.524s 00:06:31.624 sys 0m1.239s 00:06:31.624 12:26:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:31.624 12:26:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:31.624 ************************************ 00:06:31.624 END TEST non_locking_app_on_locked_coremask 00:06:31.624 ************************************ 00:06:31.624 12:26:37 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:31.624 12:26:37 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:31.624 12:26:37 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:31.624 12:26:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:31.624 ************************************ 00:06:31.624 START TEST locking_app_on_unlocked_coremask 00:06:31.624 ************************************ 00:06:31.624 12:26:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:31.624 12:26:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=980050 00:06:31.624 12:26:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 980050 /var/tmp/spdk.sock 00:06:31.624 12:26:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:31.624 12:26:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 980050 ']' 00:06:31.624 12:26:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.624 12:26:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:31.624 12:26:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:31.624 12:26:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:31.624 12:26:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:31.624 [2024-12-16 12:26:37.126277] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:06:31.625 [2024-12-16 12:26:37.126346] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid980050 ] 00:06:31.884 [2024-12-16 12:26:37.198332] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:31.884 [2024-12-16 12:26:37.198357] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.884 [2024-12-16 12:26:37.240860] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.884 12:26:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:31.884 12:26:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:31.884 12:26:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=980204 00:06:31.884 12:26:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 980204 /var/tmp/spdk2.sock 00:06:31.884 12:26:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:31.884 12:26:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 980204 ']' 00:06:31.884 12:26:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:32.144 12:26:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:32.144 12:26:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:32.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:32.144 12:26:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:32.144 12:26:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:32.144 [2024-12-16 12:26:37.473686] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 
00:06:32.144 [2024-12-16 12:26:37.473773] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid980204 ] 00:06:32.144 [2024-12-16 12:26:37.571682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.144 [2024-12-16 12:26:37.659069] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.080 12:26:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:33.080 12:26:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:33.080 12:26:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 980204 00:06:33.080 12:26:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 980204 00:06:33.080 12:26:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:34.018 lslocks: write error 00:06:34.018 12:26:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 980050 00:06:34.018 12:26:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 980050 ']' 00:06:34.018 12:26:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 980050 00:06:34.018 12:26:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:34.018 12:26:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:34.018 12:26:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 980050 00:06:34.018 12:26:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:34.018 12:26:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:34.018 12:26:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 980050' 00:06:34.018 killing process with pid 980050 00:06:34.018 12:26:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 980050 00:06:34.018 12:26:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 980050 00:06:34.587 12:26:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 980204 00:06:34.587 12:26:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 980204 ']' 00:06:34.587 12:26:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 980204 00:06:34.587 12:26:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:34.587 12:26:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:34.587 12:26:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 980204 00:06:34.587 12:26:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:34.587 12:26:40 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:34.587 12:26:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 980204' 00:06:34.587 killing process with pid 980204 00:06:34.587 12:26:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 980204 00:06:34.587 12:26:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 980204 00:06:34.846 00:06:34.846 real 0m3.284s 00:06:34.846 user 0m3.456s 00:06:34.846 sys 0m1.186s 00:06:34.846 12:26:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:34.846 12:26:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:34.846 ************************************ 00:06:34.846 END TEST locking_app_on_unlocked_coremask 00:06:34.846 ************************************ 00:06:35.105 12:26:40 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:35.105 12:26:40 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:35.105 12:26:40 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:35.105 12:26:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:35.105 ************************************ 00:06:35.105 START TEST locking_app_on_locked_coremask 00:06:35.105 ************************************ 00:06:35.105 12:26:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:35.105 12:26:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=980738 00:06:35.105 12:26:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 980738 /var/tmp/spdk.sock 00:06:35.105 12:26:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:35.106 12:26:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 980738 ']' 00:06:35.106 12:26:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.106 12:26:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:35.106 12:26:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:35.106 12:26:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:35.106 12:26:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:35.106 [2024-12-16 12:26:40.490241] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 
00:06:35.106 [2024-12-16 12:26:40.490302] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid980738 ] 00:06:35.106 [2024-12-16 12:26:40.559930] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.106 [2024-12-16 12:26:40.603189] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.365 12:26:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:35.365 12:26:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:35.365 12:26:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=980882 00:06:35.365 12:26:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 980882 /var/tmp/spdk2.sock 00:06:35.365 12:26:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:35.365 12:26:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:35.365 12:26:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 980882 /var/tmp/spdk2.sock 00:06:35.365 12:26:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:35.365 12:26:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:35.365 12:26:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:35.365 12:26:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:35.365 12:26:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 980882 /var/tmp/spdk2.sock 00:06:35.365 12:26:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 980882 ']' 00:06:35.365 12:26:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:35.365 12:26:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:35.365 12:26:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:35.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:35.365 12:26:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:35.365 12:26:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:35.365 [2024-12-16 12:26:40.839958] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 
00:06:35.365 [2024-12-16 12:26:40.840043] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid980882 ] 00:06:35.624 [2024-12-16 12:26:40.939637] app.c: 782:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 980738 has claimed it. 00:06:35.624 [2024-12-16 12:26:40.939675] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:36.193 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (980882) - No such process 00:06:36.193 ERROR: process (pid: 980882) is no longer running 00:06:36.193 12:26:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:36.193 12:26:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:36.193 12:26:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:36.193 12:26:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:36.193 12:26:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:36.193 12:26:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:36.193 12:26:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 980738 00:06:36.193 12:26:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 980738 00:06:36.193 12:26:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:36.453 lslocks: write error 00:06:36.453 12:26:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 980738 00:06:36.453 12:26:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 980738 ']' 00:06:36.453 12:26:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 980738 00:06:36.453 12:26:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:36.453 12:26:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:36.453 12:26:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 980738 00:06:36.453 12:26:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:36.453 12:26:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:36.453 12:26:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 980738' 00:06:36.453 killing process with pid 980738 00:06:36.453 12:26:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 980738 00:06:36.453 12:26:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 980738 00:06:36.780 00:06:36.780 real 0m1.661s 00:06:36.780 user 0m1.787s 00:06:36.780 sys 0m0.578s 00:06:36.780 12:26:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:36.780 
12:26:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:36.780 ************************************ 00:06:36.780 END TEST locking_app_on_locked_coremask 00:06:36.780 ************************************ 00:06:36.780 12:26:42 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:36.780 12:26:42 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:36.780 12:26:42 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:36.780 12:26:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:36.780 ************************************ 00:06:36.780 START TEST locking_overlapped_coremask 00:06:36.780 ************************************ 00:06:36.780 12:26:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:36.780 12:26:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=981153 00:06:36.780 12:26:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 981153 /var/tmp/spdk.sock 00:06:36.780 12:26:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:36.780 12:26:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 981153 ']' 00:06:36.780 12:26:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.780 12:26:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:36.780 12:26:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:36.780 12:26:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:36.780 12:26:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:36.780 [2024-12-16 12:26:42.230532] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 
00:06:36.780 [2024-12-16 12:26:42.230592] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid981153 ] 00:06:36.780 [2024-12-16 12:26:42.300956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:37.039 [2024-12-16 12:26:42.344822] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:37.039 [2024-12-16 12:26:42.344916] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.039 [2024-12-16 12:26:42.344916] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:37.039 12:26:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:37.039 12:26:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:37.039 12:26:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=981181 00:06:37.039 12:26:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 981181 /var/tmp/spdk2.sock 00:06:37.039 12:26:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:37.039 12:26:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:37.039 12:26:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 981181 /var/tmp/spdk2.sock 00:06:37.039 12:26:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:37.039 12:26:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:37.039 12:26:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:37.039 12:26:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:37.039 12:26:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 981181 /var/tmp/spdk2.sock 00:06:37.039 12:26:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 981181 ']' 00:06:37.039 12:26:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:37.039 12:26:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:37.039 12:26:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:37.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:37.039 12:26:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:37.039 12:26:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:37.039 [2024-12-16 12:26:42.582250] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 
00:06:37.039 [2024-12-16 12:26:42.582307] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid981181 ] 00:06:37.298 [2024-12-16 12:26:42.682723] app.c: 782:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 981153 has claimed it. 00:06:37.298 [2024-12-16 12:26:42.682769] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:37.867 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (981181) - No such process 00:06:37.867 ERROR: process (pid: 981181) is no longer running 00:06:37.867 12:26:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:37.867 12:26:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:37.867 12:26:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:37.867 12:26:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:37.867 12:26:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:37.867 12:26:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:37.867 12:26:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:37.867 12:26:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:37.867 12:26:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:37.867 12:26:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:37.867 12:26:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 981153 00:06:37.867 12:26:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 981153 ']' 00:06:37.867 12:26:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 981153 00:06:37.867 12:26:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:37.867 12:26:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:37.867 12:26:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 981153 00:06:37.867 12:26:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:37.867 12:26:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:37.867 12:26:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 981153' 00:06:37.867 killing process with pid 981153 00:06:37.867 12:26:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 981153 00:06:37.867 12:26:43 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 981153 00:06:38.127 00:06:38.127 real 0m1.405s 00:06:38.127 user 0m3.889s 00:06:38.127 sys 0m0.424s 00:06:38.127 12:26:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:38.127 12:26:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:38.127 ************************************ 00:06:38.127 END TEST locking_overlapped_coremask 00:06:38.127 ************************************ 00:06:38.127 12:26:43 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:38.127 12:26:43 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:38.127 12:26:43 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:38.127 12:26:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:38.127 ************************************ 00:06:38.127 START TEST locking_overlapped_coremask_via_rpc 00:06:38.386 ************************************ 00:06:38.386 12:26:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:38.386 12:26:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:38.386 12:26:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=981421 00:06:38.386 12:26:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 981421 /var/tmp/spdk.sock 00:06:38.386 12:26:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 981421 ']' 00:06:38.386 12:26:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.386 12:26:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:38.386 12:26:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.386 12:26:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:38.386 12:26:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:38.386 [2024-12-16 12:26:43.710841] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:06:38.386 [2024-12-16 12:26:43.710900] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid981421 ] 00:06:38.386 [2024-12-16 12:26:43.779990] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:38.386 [2024-12-16 12:26:43.780021] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:38.386 [2024-12-16 12:26:43.822091] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:38.386 [2024-12-16 12:26:43.822187] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:38.386 [2024-12-16 12:26:43.822189] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.645 12:26:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:38.645 12:26:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:38.645 12:26:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=981486 00:06:38.645 12:26:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 981486 /var/tmp/spdk2.sock 00:06:38.645 12:26:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:38.645 12:26:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 981486 ']' 00:06:38.645 12:26:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:38.645 12:26:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:38.645 12:26:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:38.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:38.645 12:26:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:38.645 12:26:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:38.645 [2024-12-16 12:26:44.057291] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:06:38.645 [2024-12-16 12:26:44.057375] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid981486 ] 00:06:38.645 [2024-12-16 12:26:44.159896] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:38.645 [2024-12-16 12:26:44.159928] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:38.904 [2024-12-16 12:26:44.255663] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:06:38.904 [2024-12-16 12:26:44.255779] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:38.904 [2024-12-16 12:26:44.255781] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:06:39.473 12:26:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:39.473 12:26:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:39.473 12:26:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:39.473 12:26:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:39.473 12:26:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:39.473 12:26:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:39.473 12:26:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:39.473 12:26:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:39.473 12:26:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:39.473 12:26:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:39.473 12:26:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:39.473 12:26:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:39.473 12:26:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:39.473 12:26:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:39.473 12:26:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:39.473 12:26:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:39.473 [2024-12-16 12:26:44.941676] app.c: 782:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 981421 has claimed it. 
00:06:39.473 request: 00:06:39.473 { 00:06:39.473 "method": "framework_enable_cpumask_locks", 00:06:39.473 "req_id": 1 00:06:39.473 } 00:06:39.473 Got JSON-RPC error response 00:06:39.473 response: 00:06:39.473 { 00:06:39.473 "code": -32603, 00:06:39.473 "message": "Failed to claim CPU core: 2" 00:06:39.473 } 00:06:39.473 12:26:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:39.473 12:26:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:39.473 12:26:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:39.473 12:26:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:39.473 12:26:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:39.473 12:26:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 981421 /var/tmp/spdk.sock 00:06:39.473 12:26:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 981421 ']' 00:06:39.473 12:26:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.473 12:26:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:39.473 12:26:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:39.473 12:26:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:39.473 12:26:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:39.731 12:26:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:39.731 12:26:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:39.731 12:26:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 981486 /var/tmp/spdk2.sock 00:06:39.731 12:26:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 981486 ']' 00:06:39.731 12:26:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:39.731 12:26:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:39.731 12:26:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:39.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:39.731 12:26:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:39.731 12:26:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:39.990 12:26:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:39.990 12:26:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:39.990 12:26:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:39.990 12:26:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:39.990 12:26:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:39.990 12:26:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:39.990 00:06:39.990 real 0m1.665s 00:06:39.990 user 0m0.779s 00:06:39.990 sys 0m0.174s 00:06:39.990 12:26:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:39.990 12:26:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:39.990 ************************************ 00:06:39.990 END TEST locking_overlapped_coremask_via_rpc 00:06:39.990 ************************************ 00:06:39.990 12:26:45 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:39.990 12:26:45 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 981421 ]] 00:06:39.990 12:26:45 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 981421 00:06:39.990 12:26:45 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 981421 ']' 00:06:39.990 12:26:45 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 981421 00:06:39.990 12:26:45 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:39.990 12:26:45 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:39.990 12:26:45 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 981421 00:06:39.990 12:26:45 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:39.990 12:26:45 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:39.990 12:26:45 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 981421' 00:06:39.990 killing process with pid 981421 00:06:39.990 12:26:45 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 981421 00:06:39.990 12:26:45 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 981421 00:06:40.249 12:26:45 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 981486 ]] 00:06:40.249 12:26:45 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 981486 00:06:40.249 12:26:45 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 981486 ']' 00:06:40.249 12:26:45 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 981486 00:06:40.249 12:26:45 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:40.249 12:26:45 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:06:40.249 12:26:45 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 981486 00:06:40.508 12:26:45 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:40.508 12:26:45 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:40.508 12:26:45 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 981486' 00:06:40.508 killing process with pid 981486 00:06:40.508 12:26:45 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 981486 00:06:40.508 12:26:45 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 981486 00:06:40.767 12:26:46 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:40.767 12:26:46 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:40.767 12:26:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 981421 ]] 00:06:40.767 12:26:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 981421 00:06:40.767 12:26:46 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 981421 ']' 00:06:40.767 12:26:46 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 981421 00:06:40.767 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (981421) - No such process 00:06:40.767 12:26:46 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 981421 is not found' 00:06:40.767 Process with pid 981421 is not found 00:06:40.767 12:26:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 981486 ]] 00:06:40.767 12:26:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 981486 00:06:40.767 12:26:46 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 981486 ']' 00:06:40.767 12:26:46 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 981486 00:06:40.767 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (981486) - No such process 00:06:40.767 12:26:46 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 981486 is not found' 00:06:40.767 Process with pid 981486 is not found 00:06:40.767 12:26:46 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:40.767 00:06:40.767 real 0m15.530s 00:06:40.767 user 0m25.896s 00:06:40.767 sys 0m5.944s 00:06:40.767 12:26:46 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:40.767 12:26:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:40.767 ************************************ 00:06:40.767 END TEST cpu_locks 00:06:40.767 ************************************ 00:06:40.767 00:06:40.767 real 0m39.678s 00:06:40.767 user 1m13.070s 00:06:40.767 sys 0m10.110s 00:06:40.767 12:26:46 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:40.767 12:26:46 event -- common/autotest_common.sh@10 -- # set +x 00:06:40.767 ************************************ 00:06:40.767 END TEST event 00:06:40.767 ************************************ 00:06:40.767 12:26:46 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:06:40.767 12:26:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:40.767 12:26:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:40.767 12:26:46 -- common/autotest_common.sh@10 -- # set +x 00:06:40.767 ************************************ 00:06:40.767 START TEST thread 00:06:40.767 ************************************ 00:06:40.767 12:26:46 thread -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:06:41.027 * Looking for test storage... 00:06:41.027 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread 00:06:41.027 12:26:46 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:41.027 12:26:46 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:41.027 12:26:46 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:06:41.027 12:26:46 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:41.027 12:26:46 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:41.027 12:26:46 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:41.027 12:26:46 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:41.027 12:26:46 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:41.027 12:26:46 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:41.027 12:26:46 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:41.027 12:26:46 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:41.027 12:26:46 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:41.027 12:26:46 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:41.027 12:26:46 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:41.027 12:26:46 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:41.027 12:26:46 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:41.027 12:26:46 thread -- scripts/common.sh@345 -- # : 1 00:06:41.027 12:26:46 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:41.027 12:26:46 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:41.027 12:26:46 thread -- scripts/common.sh@365 -- # decimal 1 00:06:41.027 12:26:46 thread -- scripts/common.sh@353 -- # local d=1 00:06:41.027 12:26:46 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:41.027 12:26:46 thread -- scripts/common.sh@355 -- # echo 1 00:06:41.027 12:26:46 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:41.027 12:26:46 thread -- scripts/common.sh@366 -- # decimal 2 00:06:41.027 12:26:46 thread -- scripts/common.sh@353 -- # local d=2 00:06:41.027 12:26:46 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:41.027 12:26:46 thread -- scripts/common.sh@355 -- # echo 2 00:06:41.027 12:26:46 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:41.027 12:26:46 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:41.027 12:26:46 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:41.027 12:26:46 thread -- scripts/common.sh@368 -- # return 0 00:06:41.027 12:26:46 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:41.027 12:26:46 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:41.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.027 --rc genhtml_branch_coverage=1 00:06:41.027 --rc genhtml_function_coverage=1 00:06:41.027 --rc genhtml_legend=1 00:06:41.027 --rc geninfo_all_blocks=1 00:06:41.027 --rc geninfo_unexecuted_blocks=1 00:06:41.027 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:41.027 ' 00:06:41.027 12:26:46 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:41.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.027 --rc genhtml_branch_coverage=1 00:06:41.027 --rc genhtml_function_coverage=1 00:06:41.027 --rc genhtml_legend=1 00:06:41.027 --rc geninfo_all_blocks=1 
00:06:41.027 --rc geninfo_unexecuted_blocks=1 00:06:41.027 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:41.027 ' 00:06:41.027 12:26:46 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:41.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.027 --rc genhtml_branch_coverage=1 00:06:41.027 --rc genhtml_function_coverage=1 00:06:41.027 --rc genhtml_legend=1 00:06:41.027 --rc geninfo_all_blocks=1 00:06:41.027 --rc geninfo_unexecuted_blocks=1 00:06:41.027 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:41.027 ' 00:06:41.027 12:26:46 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:41.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.027 --rc genhtml_branch_coverage=1 00:06:41.027 --rc genhtml_function_coverage=1 00:06:41.027 --rc genhtml_legend=1 00:06:41.027 --rc geninfo_all_blocks=1 00:06:41.027 --rc geninfo_unexecuted_blocks=1 00:06:41.027 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:41.027 ' 00:06:41.027 12:26:46 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:41.027 12:26:46 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:41.027 12:26:46 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:41.027 12:26:46 thread -- common/autotest_common.sh@10 -- # set +x 00:06:41.027 ************************************ 00:06:41.027 START TEST thread_poller_perf 00:06:41.027 ************************************ 00:06:41.027 12:26:46 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:41.027 [2024-12-16 12:26:46.509873] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:06:41.027 [2024-12-16 12:26:46.509929] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid981941 ] 00:06:41.027 [2024-12-16 12:26:46.577804] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.286 [2024-12-16 12:26:46.618698] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.286 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:42.224 [2024-12-16T11:26:47.790Z] ====================================== 00:06:42.224 [2024-12-16T11:26:47.790Z] busy:2504700176 (cyc) 00:06:42.224 [2024-12-16T11:26:47.790Z] total_run_count: 868000 00:06:42.224 [2024-12-16T11:26:47.790Z] tsc_hz: 2500000000 (cyc) 00:06:42.224 [2024-12-16T11:26:47.790Z] ====================================== 00:06:42.224 [2024-12-16T11:26:47.790Z] poller_cost: 2885 (cyc), 1154 (nsec) 00:06:42.224 00:06:42.224 real 0m1.155s 00:06:42.224 user 0m1.081s 00:06:42.224 sys 0m0.071s 00:06:42.224 12:26:47 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:42.224 12:26:47 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:42.224 ************************************ 00:06:42.224 END TEST thread_poller_perf 00:06:42.224 ************************************ 00:06:42.224 12:26:47 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:42.224 12:26:47 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:42.224 12:26:47 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:42.224 12:26:47 thread -- common/autotest_common.sh@10 -- # set +x 00:06:42.224 ************************************ 00:06:42.224 START TEST thread_poller_perf 00:06:42.224 ************************************ 00:06:42.224 12:26:47 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:42.224 [2024-12-16 12:26:47.750563] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:06:42.224 [2024-12-16 12:26:47.750649] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid982154 ] 00:06:42.483 [2024-12-16 12:26:47.823460] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.483 [2024-12-16 12:26:47.862194] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.483 Running 1000 pollers for 1 seconds with 0 microseconds period. 
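The poller_cost that poller_perf prints is consistent with dividing the measured busy cycles by the number of poller invocations and converting to nanoseconds with the reported TSC frequency. Re-deriving the 1-microsecond-period run above as a quick check (the same arithmetic applies to the 0-microsecond run whose results follow):

    # Illustrative re-calculation from the figures reported above (not part of the test output).
    busy_cyc=2504700176     # busy: cycles spent running pollers
    runs=868000             # total_run_count
    tsc_hz=2500000000       # tsc_hz: 2.5 GHz
    cost_cyc=$(( busy_cyc / runs ))                   # 2885 cycles per poller call
    cost_ns=$(( cost_cyc * 1000000000 / tsc_hz ))     # 1154 ns per poller call
    echo "poller_cost: ${cost_cyc} (cyc), ${cost_ns} (nsec)"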
00:06:43.424 [2024-12-16T11:26:48.990Z] ====================================== 00:06:43.424 [2024-12-16T11:26:48.990Z] busy:2501466872 (cyc) 00:06:43.424 [2024-12-16T11:26:48.990Z] total_run_count: 11988000 00:06:43.424 [2024-12-16T11:26:48.990Z] tsc_hz: 2500000000 (cyc) 00:06:43.424 [2024-12-16T11:26:48.990Z] ====================================== 00:06:43.424 [2024-12-16T11:26:48.990Z] poller_cost: 208 (cyc), 83 (nsec) 00:06:43.424 00:06:43.424 real 0m1.165s 00:06:43.424 user 0m1.078s 00:06:43.424 sys 0m0.084s 00:06:43.424 12:26:48 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:43.424 12:26:48 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:43.424 ************************************ 00:06:43.424 END TEST thread_poller_perf 00:06:43.424 ************************************ 00:06:43.424 12:26:48 thread -- thread/thread.sh@17 -- # [[ n != \y ]] 00:06:43.424 12:26:48 thread -- thread/thread.sh@18 -- # run_test thread_spdk_lock /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:06:43.424 12:26:48 thread -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:43.424 12:26:48 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:43.424 12:26:48 thread -- common/autotest_common.sh@10 -- # set +x 00:06:43.424 ************************************ 00:06:43.424 START TEST thread_spdk_lock 00:06:43.424 ************************************ 00:06:43.424 12:26:48 thread.thread_spdk_lock -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:06:43.424 [2024-12-16 12:26:48.985601] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:06:43.424 [2024-12-16 12:26:48.985691] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid982434 ] 00:06:43.683 [2024-12-16 12:26:49.058009] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:43.683 [2024-12-16 12:26:49.099355] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:43.683 [2024-12-16 12:26:49.099359] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.251 [2024-12-16 12:26:49.598600] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 990:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:06:44.251 [2024-12-16 12:26:49.598644] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3214:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:06:44.251 [2024-12-16 12:26:49.598659] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3169:sspin_stacks_print: *ERROR*: spinlock 0x14e4380 00:06:44.251 [2024-12-16 12:26:49.599395] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 885:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:06:44.251 [2024-12-16 12:26:49.599499] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:1051:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:06:44.251 [2024-12-16 
12:26:49.599519] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 885:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:06:44.251 Starting test contend 00:06:44.251 Worker Delay Wait us Hold us Total us 00:06:44.251 0 3 172049 188632 360681 00:06:44.251 1 5 87710 289842 377552 00:06:44.251 PASS test contend 00:06:44.251 Starting test hold_by_poller 00:06:44.251 PASS test hold_by_poller 00:06:44.251 Starting test hold_by_message 00:06:44.251 PASS test hold_by_message 00:06:44.251 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock summary: 00:06:44.251 100014 assertions passed 00:06:44.251 0 assertions failed 00:06:44.251 00:06:44.251 real 0m0.665s 00:06:44.251 user 0m1.079s 00:06:44.251 sys 0m0.084s 00:06:44.251 12:26:49 thread.thread_spdk_lock -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:44.251 12:26:49 thread.thread_spdk_lock -- common/autotest_common.sh@10 -- # set +x 00:06:44.251 ************************************ 00:06:44.251 END TEST thread_spdk_lock 00:06:44.251 ************************************ 00:06:44.251 00:06:44.251 real 0m3.395s 00:06:44.251 user 0m3.425s 00:06:44.251 sys 0m0.478s 00:06:44.251 12:26:49 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:44.251 12:26:49 thread -- common/autotest_common.sh@10 -- # set +x 00:06:44.251 ************************************ 00:06:44.251 END TEST thread 00:06:44.251 ************************************ 00:06:44.251 12:26:49 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:44.251 12:26:49 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh 00:06:44.251 12:26:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:44.251 12:26:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:44.251 12:26:49 -- common/autotest_common.sh@10 -- # set +x 00:06:44.251 ************************************ 00:06:44.251 START TEST app_cmdline 00:06:44.251 ************************************ 00:06:44.251 12:26:49 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh 00:06:44.511 * Looking for test storage... 
00:06:44.511 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:06:44.511 12:26:49 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:44.511 12:26:49 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:06:44.511 12:26:49 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:44.511 12:26:49 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:44.511 12:26:49 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:44.511 12:26:49 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:44.511 12:26:49 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:44.511 12:26:49 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:44.511 12:26:49 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:44.511 12:26:49 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:44.511 12:26:49 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:44.511 12:26:49 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:44.511 12:26:49 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:44.511 12:26:49 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:44.511 12:26:49 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:44.511 12:26:49 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:44.511 12:26:49 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:44.511 12:26:49 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:44.511 12:26:49 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:44.511 12:26:49 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:44.511 12:26:49 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:44.511 12:26:49 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:44.511 12:26:49 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:44.511 12:26:49 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:44.511 12:26:49 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:44.511 12:26:49 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:44.511 12:26:49 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:44.511 12:26:49 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:44.511 12:26:49 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:44.511 12:26:49 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:44.511 12:26:49 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:44.511 12:26:49 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:44.511 12:26:49 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:44.511 12:26:49 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:44.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.511 --rc genhtml_branch_coverage=1 00:06:44.511 --rc genhtml_function_coverage=1 00:06:44.511 --rc genhtml_legend=1 00:06:44.511 --rc geninfo_all_blocks=1 00:06:44.511 --rc geninfo_unexecuted_blocks=1 00:06:44.511 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:44.511 ' 00:06:44.511 12:26:49 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:44.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.511 --rc genhtml_branch_coverage=1 00:06:44.511 --rc genhtml_function_coverage=1 00:06:44.511 --rc 
genhtml_legend=1 00:06:44.511 --rc geninfo_all_blocks=1 00:06:44.511 --rc geninfo_unexecuted_blocks=1 00:06:44.511 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:44.511 ' 00:06:44.511 12:26:49 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:44.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.511 --rc genhtml_branch_coverage=1 00:06:44.511 --rc genhtml_function_coverage=1 00:06:44.511 --rc genhtml_legend=1 00:06:44.511 --rc geninfo_all_blocks=1 00:06:44.511 --rc geninfo_unexecuted_blocks=1 00:06:44.511 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:44.511 ' 00:06:44.511 12:26:49 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:44.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.511 --rc genhtml_branch_coverage=1 00:06:44.511 --rc genhtml_function_coverage=1 00:06:44.511 --rc genhtml_legend=1 00:06:44.511 --rc geninfo_all_blocks=1 00:06:44.511 --rc geninfo_unexecuted_blocks=1 00:06:44.511 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:44.511 ' 00:06:44.511 12:26:49 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:44.511 12:26:49 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=982758 00:06:44.512 12:26:49 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 982758 00:06:44.512 12:26:49 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:44.512 12:26:49 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 982758 ']' 00:06:44.512 12:26:49 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.512 12:26:49 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:44.512 12:26:49 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:44.512 12:26:49 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:44.512 12:26:49 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:44.512 [2024-12-16 12:26:49.937266] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 
00:06:44.512 [2024-12-16 12:26:49.937329] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid982758 ] 00:06:44.512 [2024-12-16 12:26:50.007170] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.512 [2024-12-16 12:26:50.053564] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.771 12:26:50 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:44.771 12:26:50 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:44.771 12:26:50 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:45.030 { 00:06:45.030 "version": "SPDK v25.01-pre git sha1 a393e5e6e", 00:06:45.030 "fields": { 00:06:45.030 "major": 25, 00:06:45.030 "minor": 1, 00:06:45.030 "patch": 0, 00:06:45.030 "suffix": "-pre", 00:06:45.030 "commit": "a393e5e6e" 00:06:45.030 } 00:06:45.030 } 00:06:45.030 12:26:50 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:45.030 12:26:50 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:45.030 12:26:50 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:45.030 12:26:50 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:45.030 12:26:50 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:45.030 12:26:50 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.030 12:26:50 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:45.030 12:26:50 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:45.030 12:26:50 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:45.030 12:26:50 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.030 12:26:50 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:45.030 12:26:50 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:45.030 12:26:50 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:45.030 12:26:50 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:45.030 12:26:50 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:45.030 12:26:50 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:06:45.030 12:26:50 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:45.030 12:26:50 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:06:45.030 12:26:50 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:45.030 12:26:50 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:06:45.030 12:26:50 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:45.030 12:26:50 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:06:45.030 12:26:50 app_cmdline -- 
common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py ]] 00:06:45.030 12:26:50 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:45.290 request: 00:06:45.290 { 00:06:45.290 "method": "env_dpdk_get_mem_stats", 00:06:45.290 "req_id": 1 00:06:45.290 } 00:06:45.290 Got JSON-RPC error response 00:06:45.290 response: 00:06:45.290 { 00:06:45.290 "code": -32601, 00:06:45.290 "message": "Method not found" 00:06:45.290 } 00:06:45.290 12:26:50 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:45.290 12:26:50 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:45.290 12:26:50 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:45.290 12:26:50 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:45.290 12:26:50 app_cmdline -- app/cmdline.sh@1 -- # killprocess 982758 00:06:45.290 12:26:50 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 982758 ']' 00:06:45.290 12:26:50 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 982758 00:06:45.290 12:26:50 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:45.290 12:26:50 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:45.290 12:26:50 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 982758 00:06:45.290 12:26:50 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:45.290 12:26:50 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:45.290 12:26:50 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 982758' 00:06:45.290 killing process with pid 982758 00:06:45.290 12:26:50 app_cmdline -- common/autotest_common.sh@973 -- # kill 982758 00:06:45.290 12:26:50 app_cmdline -- common/autotest_common.sh@978 -- # wait 982758 00:06:45.549 00:06:45.549 real 0m1.302s 00:06:45.549 user 0m1.480s 00:06:45.549 sys 0m0.488s 00:06:45.549 12:26:51 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:45.549 12:26:51 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:45.549 ************************************ 00:06:45.549 END TEST app_cmdline 00:06:45.549 ************************************ 00:06:45.549 12:26:51 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh 00:06:45.549 12:26:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:45.549 12:26:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:45.549 12:26:51 -- common/autotest_common.sh@10 -- # set +x 00:06:45.809 ************************************ 00:06:45.809 START TEST version 00:06:45.809 ************************************ 00:06:45.809 12:26:51 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh 00:06:45.809 * Looking for test storage... 
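The cmdline test above drives a target started with an RPC allow-list; only the two whitelisted methods answer, and anything else gets the -32601 "Method not found" response seen in the trace. A rough sketch of the same checks by hand, using only commands that appear in the log (the jq filter for the version string is an assumed convenience, not taken from the test):

    # Illustrative shell session (not part of the recorded output).
    SPDK_TGT=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt
    RPC=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py
    $SPDK_TGT --rpcs-allowed spdk_get_version,rpc_get_methods &   # expose only these two methods
    $RPC spdk_get_version | jq -r '.version'     # e.g. "SPDK v25.01-pre git sha1 a393e5e6e"
    $RPC rpc_get_methods | jq -r '.[]' | sort    # rpc_get_methods, spdk_get_version
    $RPC env_dpdk_get_mem_stats                  # rejected: -32601 "Method not found"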
00:06:45.809 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:06:45.809 12:26:51 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:45.809 12:26:51 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:45.809 12:26:51 version -- common/autotest_common.sh@1711 -- # lcov --version 00:06:45.809 12:26:51 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:45.809 12:26:51 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:45.809 12:26:51 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:45.809 12:26:51 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:45.809 12:26:51 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:45.809 12:26:51 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:45.809 12:26:51 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:45.809 12:26:51 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:45.809 12:26:51 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:45.809 12:26:51 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:45.809 12:26:51 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:45.809 12:26:51 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:45.809 12:26:51 version -- scripts/common.sh@344 -- # case "$op" in 00:06:45.809 12:26:51 version -- scripts/common.sh@345 -- # : 1 00:06:45.809 12:26:51 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:45.809 12:26:51 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:45.809 12:26:51 version -- scripts/common.sh@365 -- # decimal 1 00:06:45.809 12:26:51 version -- scripts/common.sh@353 -- # local d=1 00:06:45.809 12:26:51 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:45.809 12:26:51 version -- scripts/common.sh@355 -- # echo 1 00:06:45.809 12:26:51 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:45.809 12:26:51 version -- scripts/common.sh@366 -- # decimal 2 00:06:45.809 12:26:51 version -- scripts/common.sh@353 -- # local d=2 00:06:45.810 12:26:51 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:45.810 12:26:51 version -- scripts/common.sh@355 -- # echo 2 00:06:45.810 12:26:51 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:45.810 12:26:51 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:45.810 12:26:51 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:45.810 12:26:51 version -- scripts/common.sh@368 -- # return 0 00:06:45.810 12:26:51 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:45.810 12:26:51 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:45.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.810 --rc genhtml_branch_coverage=1 00:06:45.810 --rc genhtml_function_coverage=1 00:06:45.810 --rc genhtml_legend=1 00:06:45.810 --rc geninfo_all_blocks=1 00:06:45.810 --rc geninfo_unexecuted_blocks=1 00:06:45.810 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:45.810 ' 00:06:45.810 12:26:51 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:45.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.810 --rc genhtml_branch_coverage=1 00:06:45.810 --rc genhtml_function_coverage=1 00:06:45.810 --rc genhtml_legend=1 00:06:45.810 --rc geninfo_all_blocks=1 00:06:45.810 --rc geninfo_unexecuted_blocks=1 00:06:45.810 --gcov-tool 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:45.810 ' 00:06:45.810 12:26:51 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:45.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.810 --rc genhtml_branch_coverage=1 00:06:45.810 --rc genhtml_function_coverage=1 00:06:45.810 --rc genhtml_legend=1 00:06:45.810 --rc geninfo_all_blocks=1 00:06:45.810 --rc geninfo_unexecuted_blocks=1 00:06:45.810 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:45.810 ' 00:06:45.810 12:26:51 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:45.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.810 --rc genhtml_branch_coverage=1 00:06:45.810 --rc genhtml_function_coverage=1 00:06:45.810 --rc genhtml_legend=1 00:06:45.810 --rc geninfo_all_blocks=1 00:06:45.810 --rc geninfo_unexecuted_blocks=1 00:06:45.810 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:45.810 ' 00:06:45.810 12:26:51 version -- app/version.sh@17 -- # get_header_version major 00:06:45.810 12:26:51 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:06:45.810 12:26:51 version -- app/version.sh@14 -- # cut -f2 00:06:45.810 12:26:51 version -- app/version.sh@14 -- # tr -d '"' 00:06:45.810 12:26:51 version -- app/version.sh@17 -- # major=25 00:06:45.810 12:26:51 version -- app/version.sh@18 -- # get_header_version minor 00:06:45.810 12:26:51 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:06:45.810 12:26:51 version -- app/version.sh@14 -- # cut -f2 00:06:45.810 12:26:51 version -- app/version.sh@14 -- # tr -d '"' 00:06:45.810 12:26:51 version -- app/version.sh@18 -- # minor=1 00:06:45.810 12:26:51 version -- app/version.sh@19 -- # get_header_version patch 00:06:45.810 12:26:51 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:06:45.810 12:26:51 version -- app/version.sh@14 -- # cut -f2 00:06:45.810 12:26:51 version -- app/version.sh@14 -- # tr -d '"' 00:06:45.810 12:26:51 version -- app/version.sh@19 -- # patch=0 00:06:45.810 12:26:51 version -- app/version.sh@20 -- # get_header_version suffix 00:06:45.810 12:26:51 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:06:45.810 12:26:51 version -- app/version.sh@14 -- # cut -f2 00:06:45.810 12:26:51 version -- app/version.sh@14 -- # tr -d '"' 00:06:45.810 12:26:51 version -- app/version.sh@20 -- # suffix=-pre 00:06:45.810 12:26:51 version -- app/version.sh@22 -- # version=25.1 00:06:45.810 12:26:51 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:45.810 12:26:51 version -- app/version.sh@28 -- # version=25.1rc0 00:06:45.810 12:26:51 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:06:45.810 12:26:51 version -- app/version.sh@30 -- # 
python3 -c 'import spdk; print(spdk.__version__)' 00:06:45.810 12:26:51 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:45.810 12:26:51 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:45.810 00:06:45.810 real 0m0.240s 00:06:45.810 user 0m0.141s 00:06:45.810 sys 0m0.146s 00:06:45.810 12:26:51 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:45.810 12:26:51 version -- common/autotest_common.sh@10 -- # set +x 00:06:45.810 ************************************ 00:06:45.810 END TEST version 00:06:45.810 ************************************ 00:06:46.069 12:26:51 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:46.069 12:26:51 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:46.069 12:26:51 -- spdk/autotest.sh@194 -- # uname -s 00:06:46.069 12:26:51 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:46.069 12:26:51 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:46.069 12:26:51 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:46.069 12:26:51 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:06:46.069 12:26:51 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:46.069 12:26:51 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:46.069 12:26:51 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:46.069 12:26:51 -- common/autotest_common.sh@10 -- # set +x 00:06:46.069 12:26:51 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:46.069 12:26:51 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:06:46.069 12:26:51 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:06:46.069 12:26:51 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:06:46.069 12:26:51 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:06:46.069 12:26:51 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:06:46.069 12:26:51 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:06:46.069 12:26:51 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:06:46.069 12:26:51 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:06:46.069 12:26:51 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:06:46.069 12:26:51 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:06:46.069 12:26:51 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:06:46.069 12:26:51 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:06:46.069 12:26:51 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:06:46.069 12:26:51 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:06:46.069 12:26:51 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:06:46.069 12:26:51 -- spdk/autotest.sh@374 -- # [[ 1 -eq 1 ]] 00:06:46.069 12:26:51 -- spdk/autotest.sh@375 -- # run_test llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:06:46.069 12:26:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:46.070 12:26:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:46.070 12:26:51 -- common/autotest_common.sh@10 -- # set +x 00:06:46.070 ************************************ 00:06:46.070 START TEST llvm_fuzz 00:06:46.070 ************************************ 00:06:46.070 12:26:51 llvm_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:06:46.070 * Looking for test storage... 
00:06:46.070 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz 00:06:46.070 12:26:51 llvm_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:46.070 12:26:51 llvm_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:06:46.070 12:26:51 llvm_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:46.329 12:26:51 llvm_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:46.329 12:26:51 llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:46.329 12:26:51 llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:46.329 12:26:51 llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:46.329 12:26:51 llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:06:46.329 12:26:51 llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:06:46.329 12:26:51 llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:06:46.329 12:26:51 llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:06:46.329 12:26:51 llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:06:46.329 12:26:51 llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:06:46.329 12:26:51 llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:06:46.329 12:26:51 llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:46.329 12:26:51 llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:06:46.329 12:26:51 llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:06:46.329 12:26:51 llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:46.329 12:26:51 llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:46.329 12:26:51 llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:06:46.329 12:26:51 llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:06:46.329 12:26:51 llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:46.329 12:26:51 llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:06:46.329 12:26:51 llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:06:46.329 12:26:51 llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:06:46.329 12:26:51 llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:06:46.329 12:26:51 llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:46.329 12:26:51 llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:06:46.329 12:26:51 llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:06:46.329 12:26:51 llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:46.329 12:26:51 llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:46.329 12:26:51 llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:06:46.329 12:26:51 llvm_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:46.329 12:26:51 llvm_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:46.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.330 --rc genhtml_branch_coverage=1 00:06:46.330 --rc genhtml_function_coverage=1 00:06:46.330 --rc genhtml_legend=1 00:06:46.330 --rc geninfo_all_blocks=1 00:06:46.330 --rc geninfo_unexecuted_blocks=1 00:06:46.330 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:46.330 ' 00:06:46.330 12:26:51 llvm_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:46.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.330 --rc genhtml_branch_coverage=1 00:06:46.330 --rc genhtml_function_coverage=1 00:06:46.330 --rc genhtml_legend=1 00:06:46.330 --rc geninfo_all_blocks=1 00:06:46.330 --rc 
geninfo_unexecuted_blocks=1 00:06:46.330 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:46.330 ' 00:06:46.330 12:26:51 llvm_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:46.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.330 --rc genhtml_branch_coverage=1 00:06:46.330 --rc genhtml_function_coverage=1 00:06:46.330 --rc genhtml_legend=1 00:06:46.330 --rc geninfo_all_blocks=1 00:06:46.330 --rc geninfo_unexecuted_blocks=1 00:06:46.330 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:46.330 ' 00:06:46.330 12:26:51 llvm_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:46.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.330 --rc genhtml_branch_coverage=1 00:06:46.330 --rc genhtml_function_coverage=1 00:06:46.330 --rc genhtml_legend=1 00:06:46.330 --rc geninfo_all_blocks=1 00:06:46.330 --rc geninfo_unexecuted_blocks=1 00:06:46.330 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:46.330 ' 00:06:46.330 12:26:51 llvm_fuzz -- fuzz/llvm.sh@11 -- # fuzzers=($(get_fuzzer_targets)) 00:06:46.330 12:26:51 llvm_fuzz -- fuzz/llvm.sh@11 -- # get_fuzzer_targets 00:06:46.330 12:26:51 llvm_fuzz -- common/autotest_common.sh@550 -- # fuzzers=() 00:06:46.330 12:26:51 llvm_fuzz -- common/autotest_common.sh@550 -- # local fuzzers 00:06:46.330 12:26:51 llvm_fuzz -- common/autotest_common.sh@552 -- # [[ -n '' ]] 00:06:46.330 12:26:51 llvm_fuzz -- common/autotest_common.sh@555 -- # fuzzers=("$rootdir/test/fuzz/llvm/"*) 00:06:46.330 12:26:51 llvm_fuzz -- common/autotest_common.sh@556 -- # fuzzers=("${fuzzers[@]##*/}") 00:06:46.330 12:26:51 llvm_fuzz -- common/autotest_common.sh@559 -- # echo 'common.sh llvm-gcov.sh nvmf vfio' 00:06:46.330 12:26:51 llvm_fuzz -- fuzz/llvm.sh@13 -- # llvm_out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm 00:06:46.330 12:26:51 llvm_fuzz -- fuzz/llvm.sh@15 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm 00:06:46.330 12:26:51 llvm_fuzz -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:06:46.330 12:26:51 llvm_fuzz -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:06:46.330 12:26:51 llvm_fuzz -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:06:46.330 12:26:51 llvm_fuzz -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:06:46.330 12:26:51 llvm_fuzz -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:06:46.330 12:26:51 llvm_fuzz -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:06:46.330 12:26:51 llvm_fuzz -- fuzz/llvm.sh@19 -- # run_test nvmf_llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:06:46.330 12:26:51 llvm_fuzz -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:46.330 12:26:51 llvm_fuzz -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:46.330 12:26:51 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:06:46.330 ************************************ 00:06:46.330 START TEST nvmf_llvm_fuzz 00:06:46.330 ************************************ 00:06:46.330 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:06:46.330 * Looking for test storage... 
00:06:46.330 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:46.330 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:46.330 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:06:46.330 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:46.330 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:46.330 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:46.330 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:46.330 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:46.330 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:06:46.330 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:06:46.330 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:06:46.330 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:06:46.330 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:06:46.330 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:06:46.330 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:06:46.330 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:46.330 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:06:46.330 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:06:46.330 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:46.330 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:46.330 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:06:46.593 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:06:46.593 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:46.593 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:06:46.593 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:06:46.593 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:06:46.593 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:06:46.593 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:46.593 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:06:46.593 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:06:46.593 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:46.593 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:46.593 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:06:46.593 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:46.593 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:46.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.593 --rc genhtml_branch_coverage=1 00:06:46.593 --rc genhtml_function_coverage=1 00:06:46.593 --rc genhtml_legend=1 00:06:46.593 --rc geninfo_all_blocks=1 00:06:46.593 --rc geninfo_unexecuted_blocks=1 00:06:46.593 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:46.593 ' 00:06:46.593 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:46.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.593 --rc genhtml_branch_coverage=1 00:06:46.593 --rc genhtml_function_coverage=1 00:06:46.593 --rc genhtml_legend=1 00:06:46.593 --rc geninfo_all_blocks=1 00:06:46.593 --rc geninfo_unexecuted_blocks=1 00:06:46.593 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:46.593 ' 00:06:46.593 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:46.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.593 --rc genhtml_branch_coverage=1 00:06:46.593 --rc genhtml_function_coverage=1 00:06:46.593 --rc genhtml_legend=1 00:06:46.593 --rc geninfo_all_blocks=1 00:06:46.593 --rc geninfo_unexecuted_blocks=1 00:06:46.593 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:46.593 ' 00:06:46.593 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:46.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.593 --rc genhtml_branch_coverage=1 00:06:46.593 --rc genhtml_function_coverage=1 00:06:46.593 --rc genhtml_legend=1 00:06:46.594 --rc geninfo_all_blocks=1 00:06:46.594 --rc geninfo_unexecuted_blocks=1 00:06:46.594 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:46.594 ' 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@60 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz 
-- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@34 -- # set -e 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@36 -- # shopt -s extglob 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@23 -- # CONFIG_CET=n 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@24 -- # 
CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@37 -- # CONFIG_FUZZER=y 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@58 -- # 
CONFIG_ARCH=native 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@72 -- # CONFIG_SHARED=n 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@76 -- # CONFIG_FC=n 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@90 -- # CONFIG_URING=n 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # dirname 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:06:46.594 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:06:46.595 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:06:46.595 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:06:46.595 #define SPDK_CONFIG_H 00:06:46.595 #define SPDK_CONFIG_AIO_FSDEV 1 00:06:46.595 #define SPDK_CONFIG_APPS 1 00:06:46.595 #define SPDK_CONFIG_ARCH native 00:06:46.595 #undef SPDK_CONFIG_ASAN 00:06:46.595 #undef SPDK_CONFIG_AVAHI 00:06:46.595 #undef SPDK_CONFIG_CET 00:06:46.595 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:06:46.595 #define SPDK_CONFIG_COVERAGE 1 00:06:46.595 #define SPDK_CONFIG_CROSS_PREFIX 00:06:46.595 #undef SPDK_CONFIG_CRYPTO 00:06:46.595 #undef SPDK_CONFIG_CRYPTO_MLX5 00:06:46.595 #undef SPDK_CONFIG_CUSTOMOCF 00:06:46.595 #undef SPDK_CONFIG_DAOS 00:06:46.595 #define SPDK_CONFIG_DAOS_DIR 00:06:46.595 #define SPDK_CONFIG_DEBUG 1 00:06:46.595 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:06:46.595 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:06:46.595 #define SPDK_CONFIG_DPDK_INC_DIR 00:06:46.595 #define SPDK_CONFIG_DPDK_LIB_DIR 00:06:46.595 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:06:46.595 #undef SPDK_CONFIG_DPDK_UADK 00:06:46.595 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:06:46.595 #define SPDK_CONFIG_EXAMPLES 1 00:06:46.595 #undef SPDK_CONFIG_FC 00:06:46.595 #define SPDK_CONFIG_FC_PATH 00:06:46.595 #define SPDK_CONFIG_FIO_PLUGIN 1 00:06:46.595 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:06:46.595 #define SPDK_CONFIG_FSDEV 1 00:06:46.595 #undef SPDK_CONFIG_FUSE 00:06:46.595 #define SPDK_CONFIG_FUZZER 1 00:06:46.595 #define SPDK_CONFIG_FUZZER_LIB /usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:06:46.595 #undef 
SPDK_CONFIG_GOLANG 00:06:46.595 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:06:46.595 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:06:46.595 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:06:46.595 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:06:46.595 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:06:46.595 #undef SPDK_CONFIG_HAVE_LIBBSD 00:06:46.595 #undef SPDK_CONFIG_HAVE_LZ4 00:06:46.595 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:06:46.595 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:06:46.595 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:06:46.595 #define SPDK_CONFIG_IDXD 1 00:06:46.595 #define SPDK_CONFIG_IDXD_KERNEL 1 00:06:46.595 #undef SPDK_CONFIG_IPSEC_MB 00:06:46.595 #define SPDK_CONFIG_IPSEC_MB_DIR 00:06:46.595 #define SPDK_CONFIG_ISAL 1 00:06:46.595 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:06:46.595 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:06:46.595 #define SPDK_CONFIG_LIBDIR 00:06:46.595 #undef SPDK_CONFIG_LTO 00:06:46.595 #define SPDK_CONFIG_MAX_LCORES 128 00:06:46.595 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:06:46.595 #define SPDK_CONFIG_NVME_CUSE 1 00:06:46.595 #undef SPDK_CONFIG_OCF 00:06:46.595 #define SPDK_CONFIG_OCF_PATH 00:06:46.595 #define SPDK_CONFIG_OPENSSL_PATH 00:06:46.595 #undef SPDK_CONFIG_PGO_CAPTURE 00:06:46.595 #define SPDK_CONFIG_PGO_DIR 00:06:46.595 #undef SPDK_CONFIG_PGO_USE 00:06:46.595 #define SPDK_CONFIG_PREFIX /usr/local 00:06:46.595 #undef SPDK_CONFIG_RAID5F 00:06:46.595 #undef SPDK_CONFIG_RBD 00:06:46.595 #define SPDK_CONFIG_RDMA 1 00:06:46.595 #define SPDK_CONFIG_RDMA_PROV verbs 00:06:46.595 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:06:46.595 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:06:46.595 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:06:46.595 #undef SPDK_CONFIG_SHARED 00:06:46.595 #undef SPDK_CONFIG_SMA 00:06:46.595 #define SPDK_CONFIG_TESTS 1 00:06:46.595 #undef SPDK_CONFIG_TSAN 00:06:46.595 #define SPDK_CONFIG_UBLK 1 00:06:46.595 #define SPDK_CONFIG_UBSAN 1 00:06:46.595 #undef SPDK_CONFIG_UNIT_TESTS 00:06:46.595 #undef SPDK_CONFIG_URING 00:06:46.595 #define SPDK_CONFIG_URING_PATH 00:06:46.595 #undef SPDK_CONFIG_URING_ZNS 00:06:46.595 #undef SPDK_CONFIG_USDT 00:06:46.595 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:06:46.595 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:06:46.595 #define SPDK_CONFIG_VFIO_USER 1 00:06:46.595 #define SPDK_CONFIG_VFIO_USER_DIR 00:06:46.595 #define SPDK_CONFIG_VHOST 1 00:06:46.595 #define SPDK_CONFIG_VIRTIO 1 00:06:46.595 #undef SPDK_CONFIG_VTUNE 00:06:46.595 #define SPDK_CONFIG_VTUNE_DIR 00:06:46.595 #define SPDK_CONFIG_WERROR 1 00:06:46.595 #define SPDK_CONFIG_WPDK_DIR 00:06:46.595 #undef SPDK_CONFIG_XNVME 00:06:46.595 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:06:46.595 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:06:46.595 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:06:46.595 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:06:46.595 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:46.595 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:46.595 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:46.595 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.595 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.595 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.595 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@5 -- # export PATH 00:06:46.595 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.595 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:06:46.595 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:06:46.595 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:06:46.595 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:06:46.595 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:06:46.595 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:06:46.595 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@64 -- # TEST_TAG=N/A 00:06:46.595 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:06:46.595 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:06:46.595 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@68 -- # uname -s 00:06:46.595 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- 
pm/common@68 -- # PM_OS=Linux 00:06:46.595 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:06:46.595 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:06:46.595 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:06:46.595 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:06:46.595 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:06:46.595 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:06:46.595 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@76 -- # SUDO[0]= 00:06:46.595 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@76 -- # SUDO[1]='sudo -E' 00:06:46.595 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:06:46.595 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:06:46.595 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ Linux == Linux ]] 00:06:46.595 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:06:46.595 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:06:46.595 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:06:46.595 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:06:46.595 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power ]] 00:06:46.595 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@58 -- # : 0 00:06:46.595 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:06:46.595 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@62 -- # : 0 00:06:46.595 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:06:46.595 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@64 -- # : 0 00:06:46.595 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:06:46.595 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@66 -- # : 1 00:06:46.595 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:06:46.595 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@68 -- # : 0 00:06:46.595 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:06:46.595 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@70 -- # : 00:06:46.595 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:06:46.595 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@72 -- # : 0 00:06:46.595 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:06:46.595 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@74 -- # : 0 00:06:46.595 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:06:46.595 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@76 -- # : 0 00:06:46.595 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:06:46.595 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- 
common/autotest_common.sh@78 -- # : 0 00:06:46.596 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:06:46.596 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@80 -- # : 0 00:06:46.596 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:06:46.596 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@82 -- # : 0 00:06:46.596 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:06:46.596 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@84 -- # : 0 00:06:46.596 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:06:46.596 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@86 -- # : 0 00:06:46.596 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:06:46.596 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@88 -- # : 0 00:06:46.596 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:06:46.596 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@90 -- # : 0 00:06:46.596 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:06:46.596 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@92 -- # : 0 00:06:46.596 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:06:46.596 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@94 -- # : 0 00:06:46.596 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:06:46.596 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@96 -- # : 0 00:06:46.596 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:06:46.596 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@98 -- # : 1 00:06:46.596 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:06:46.596 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@100 -- # : 1 00:06:46.596 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:06:46.596 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@102 -- # : rdma 00:06:46.596 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:06:46.596 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@104 -- # : 0 00:06:46.596 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:06:46.596 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@106 -- # : 0 00:06:46.596 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:06:46.596 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@108 -- # : 0 00:06:46.596 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:06:46.596 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@110 -- # : 0 00:06:46.596 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:06:46.596 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@112 -- # : 0 00:06:46.596 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:06:46.596 12:26:51 
llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@114 -- # : 0 00:06:46.596 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:06:46.596 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@116 -- # : 0 00:06:46.596 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:06:46.596 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@118 -- # : 0 00:06:46.596 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:06:46.596 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@120 -- # : 0 00:06:46.596 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:06:46.596 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@122 -- # : 0 00:06:46.596 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:06:46.596 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@124 -- # : 1 00:06:46.596 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:06:46.596 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@126 -- # : 00:06:46.596 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:06:46.596 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@128 -- # : 0 00:06:46.596 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:06:46.596 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@130 -- # : 0 00:06:46.596 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:06:46.596 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@132 -- # : 0 00:06:46.596 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:06:46.596 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@134 -- # : 0 00:06:46.596 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:06:46.596 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@136 -- # : 0 00:06:46.596 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:06:46.596 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@138 -- # : 0 00:06:46.596 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:06:46.596 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@140 -- # : 00:06:46.596 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:06:46.596 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@142 -- # : true 00:06:46.596 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:06:46.596 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@144 -- # : 0 00:06:46.596 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:06:46.596 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@146 -- # : 0 00:06:46.596 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:06:46.596 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@148 -- # : 0 00:06:46.596 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 
00:06:46.596 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@150 -- # : 0 00:06:46.596 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:06:46.596 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@152 -- # : 0 00:06:46.596 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:06:46.596 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@154 -- # : 00:06:46.596 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:06:46.596 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@156 -- # : 0 00:06:46.596 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:06:46.596 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@158 -- # : 0 00:06:46.596 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:06:46.596 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@160 -- # : 0 00:06:46.596 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:06:46.596 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@162 -- # : 0 00:06:46.596 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:06:46.596 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@164 -- # : 0 00:06:46.596 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:06:46.596 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@166 -- # : 0 00:06:46.596 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:06:46.596 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@169 -- # : 00:06:46.596 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:06:46.596 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@171 -- # : 0 00:06:46.596 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:06:46.596 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@173 -- # : 0 00:06:46.596 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:06:46.596 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@175 -- # : 1 00:06:46.596 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:06:46.596 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@177 -- # : 0 00:06:46.596 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:06:46.596 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:06:46.596 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:06:46.596 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:06:46.596 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:06:46.596 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@183 -- # export 
VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:46.596 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:46.596 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:46.596 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:46.596 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:06:46.596 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:06:46.597 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:06:46.597 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:06:46.597 12:26:51 
llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:06:46.597 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:06:46.597 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:46.597 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:46.597 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:46.597 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:46.597 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:06:46.597 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:06:46.597 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@206 -- # cat 00:06:46.597 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:06:46.597 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:46.597 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:46.597 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:46.597 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:46.597 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:06:46.597 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:06:46.597 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:06:46.597 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:06:46.597 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:06:46.597 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:06:46.597 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:46.597 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:46.597 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:46.597 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:46.597 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@262 -- # export 
AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:46.597 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:46.597 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:46.597 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:46.597 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:06:46.597 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:06:46.597 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@269 -- # _LCOV= 00:06:46.597 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:06:46.597 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@270 -- # [[ 1 -eq 1 ]] 00:06:46.597 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@270 -- # _LCOV=1 00:06:46.597 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:06:46.597 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:06:46.597 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@275 -- # lcov_opt='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:06:46.597 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:06:46.597 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@279 -- # export valgrind= 00:06:46.597 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@279 -- # valgrind= 00:06:46.597 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@285 -- # uname -s 00:06:46.597 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:06:46.597 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:06:46.597 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:06:46.597 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:06:46.597 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@289 -- # MAKE=make 00:06:46.597 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j112 00:06:46.597 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:06:46.597 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:06:46.597 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:06:46.597 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@309 -- # TEST_MODE= 00:06:46.597 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@331 -- # [[ -z 983202 ]] 00:06:46.597 12:26:51 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@331 -- # kill -0 983202 00:06:46.597 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:06:46.597 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:06:46.597 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:06:46.597 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- 
common/autotest_common.sh@344 -- # local mount target_dir 00:06:46.597 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:06:46.597 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:06:46.597 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:06:46.597 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:06:46.597 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.e15Era 00:06:46.597 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:06:46.597 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:06:46.597 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:06:46.597 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf /tmp/spdk.e15Era/tests/nvmf /tmp/spdk.e15Era 00:06:46.597 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:06:46.597 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:06:46.597 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@340 -- # df -T 00:06:46.597 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:06:46.597 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:06:46.597 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:06:46.597 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:06:46.597 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:06:46.597 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:06:46.597 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:06:46.597 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:06:46.597 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:06:46.597 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=785162240 00:06:46.597 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:06:46.597 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=4499267584 00:06:46.597 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:06:46.597 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:06:46.597 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:06:46.597 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=54199177216 00:06:46.597 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=61730586624 00:06:46.597 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@376 -- # 
uses["$mount"]=7531409408 00:06:46.597 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:06:46.597 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:06:46.597 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:06:46.597 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=30861864960 00:06:46.597 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=30865293312 00:06:46.597 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=3428352 00:06:46.597 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:06:46.597 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:06:46.597 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:06:46.597 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=12340117504 00:06:46.597 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=12346118144 00:06:46.597 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=6000640 00:06:46.597 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:06:46.597 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:06:46.597 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:06:46.597 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=30865096704 00:06:46.598 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=30865293312 00:06:46.598 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=196608 00:06:46.598 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:06:46.598 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:06:46.598 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:06:46.598 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=6173044736 00:06:46.598 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=6173057024 00:06:46.598 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:06:46.598 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:06:46.598 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:06:46.598 * Looking for test storage... 
00:06:46.598 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@381 -- # local target_space new_size 00:06:46.598 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:06:46.598 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:46.598 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:06:46.598 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@385 -- # mount=/ 00:06:46.598 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@387 -- # target_space=54199177216 00:06:46.598 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:06:46.598 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:06:46.598 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:06:46.598 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:06:46.598 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:06:46.598 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@394 -- # new_size=9746001920 00:06:46.598 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:06:46.598 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:46.598 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:46.598 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:46.598 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:46.598 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@402 -- # return 0 00:06:46.598 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1698 -- # set -o errtrace 00:06:46.598 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:06:46.598 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:06:46.598 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:06:46.598 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1703 -- # true 00:06:46.598 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1705 -- # xtrace_fd 00:06:46.598 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:06:46.598 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:06:46.598 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@27 -- # exec 00:06:46.598 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@29 -- # exec 00:06:46.598 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@31 -- # xtrace_restore 00:06:46.598 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@16 -- # unset -v 
'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:06:46.598 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:06:46.598 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@18 -- # set -x 00:06:46.598 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:46.598 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:06:46.598 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:46.598 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:46.598 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:46.598 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:46.598 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:46.598 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:06:46.598 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:06:46.598 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:06:46.598 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:06:46.598 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:06:46.598 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:06:46.598 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:06:46.598 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:46.598 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:06:46.598 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:06:46.598 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:46.598 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:46.598 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:06:46.598 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:06:46.598 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:46.598 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:06:46.598 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:06:46.598 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:06:46.598 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:06:46.598 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:46.598 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:06:46.598 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:06:46.598 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:46.598 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:46.598 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:06:46.598 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:46.598 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:46.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.598 --rc genhtml_branch_coverage=1 00:06:46.598 --rc genhtml_function_coverage=1 00:06:46.598 --rc genhtml_legend=1 00:06:46.598 --rc geninfo_all_blocks=1 00:06:46.598 --rc geninfo_unexecuted_blocks=1 00:06:46.598 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:46.598 ' 00:06:46.598 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:46.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.598 --rc genhtml_branch_coverage=1 00:06:46.598 --rc genhtml_function_coverage=1 00:06:46.598 --rc genhtml_legend=1 00:06:46.598 --rc geninfo_all_blocks=1 00:06:46.598 --rc geninfo_unexecuted_blocks=1 00:06:46.598 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:46.598 ' 00:06:46.598 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:46.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.598 --rc genhtml_branch_coverage=1 00:06:46.598 --rc genhtml_function_coverage=1 00:06:46.598 --rc genhtml_legend=1 00:06:46.598 --rc geninfo_all_blocks=1 00:06:46.598 --rc geninfo_unexecuted_blocks=1 00:06:46.598 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:46.598 ' 00:06:46.598 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:46.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.598 --rc genhtml_branch_coverage=1 00:06:46.598 --rc genhtml_function_coverage=1 00:06:46.598 --rc genhtml_legend=1 00:06:46.598 --rc geninfo_all_blocks=1 00:06:46.598 --rc geninfo_unexecuted_blocks=1 00:06:46.598 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:46.598 ' 00:06:46.598 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@61 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/../common.sh 00:06:46.598 12:26:52 
llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@8 -- # pids=() 00:06:46.598 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@63 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:06:46.599 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@64 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:06:46.599 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@64 -- # fuzz_num=25 00:06:46.599 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@65 -- # (( fuzz_num != 0 )) 00:06:46.599 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@67 -- # trap 'cleanup /tmp/llvm_fuzz* /var/tmp/suppress_nvmf_fuzz; exit 1' SIGINT SIGTERM EXIT 00:06:46.599 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@69 -- # mem_size=512 00:06:46.599 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@70 -- # [[ 1 -eq 1 ]] 00:06:46.599 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@71 -- # start_llvm_fuzz_short 25 1 00:06:46.599 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@69 -- # local fuzz_num=25 00:06:46.599 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@70 -- # local time=1 00:06:46.599 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i = 0 )) 00:06:46.599 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:46.599 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:06:46.599 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=0 00:06:46.599 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:46.599 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:46.599 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:06:46.599 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_0.conf 00:06:46.599 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:46.599 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:46.599 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 0 00:06:46.599 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4400 00:06:46.599 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:06:46.599 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' 00:06:46.599 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4400"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:46.599 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:46.599 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:46.599 12:26:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' -c /tmp/fuzz_json_0.conf -t 1 -D 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 -Z 0 00:06:46.858 [2024-12-16 12:26:52.163705] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:06:46.858 [2024-12-16 12:26:52.163754] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid983275 ] 00:06:46.858 [2024-12-16 12:26:52.341470] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.858 [2024-12-16 12:26:52.374337] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.118 [2024-12-16 12:26:52.433242] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:47.118 [2024-12-16 12:26:52.449575] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4400 *** 00:06:47.118 INFO: Running with entropic power schedule (0xFF, 100). 00:06:47.118 INFO: Seed: 834296180 00:06:47.118 INFO: Loaded 1 modules (390948 inline 8-bit counters): 390948 [0x2c8afcc, 0x2cea6f0), 00:06:47.118 INFO: Loaded 1 PC tables (390948 PCs): 390948 [0x2cea6f0,0x32e1930), 00:06:47.118 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:06:47.118 INFO: A corpus is not provided, starting from an empty corpus 00:06:47.118 #2 INITED exec/s: 0 rss: 65Mb 00:06:47.118 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:47.118 This may also happen if the target rejected all inputs we tried so far 00:06:47.118 [2024-12-16 12:26:52.494911] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a5) qid:0 cid:4 nsid:b2b2b2b2 cdw10:b2b2b2b2 cdw11:b2b2b2b2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.118 [2024-12-16 12:26:52.494939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.377 NEW_FUNC[1/717]: 0x43bbe8 in fuzz_admin_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:47 00:06:47.377 NEW_FUNC[2/717]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:47.377 #6 NEW cov: 12136 ft: 12135 corp: 2/116b lim: 320 exec/s: 0 rss: 72Mb L: 115/115 MS: 4 InsertByte-EraseBytes-ChangeBit-InsertRepeatedBytes- 00:06:47.377 [2024-12-16 12:26:52.825746] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a5) qid:0 cid:4 nsid:b2b2b2b2 cdw10:b2b2b2b2 cdw11:b2b2b2b2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.377 [2024-12-16 12:26:52.825782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.377 #7 NEW cov: 12249 ft: 12724 corp: 3/231b lim: 320 exec/s: 0 rss: 72Mb L: 115/115 MS: 1 CopyPart- 00:06:47.377 [2024-12-16 12:26:52.885849] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a5) qid:0 cid:4 nsid:b2b2b2b2 cdw10:b2b2b2b2 cdw11:b2b2b2b2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.377 [2024-12-16 12:26:52.885879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.377 #13 NEW cov: 12255 ft: 12924 corp: 4/338b lim: 320 exec/s: 0 rss: 72Mb L: 107/115 MS: 1 EraseBytes- 00:06:47.636 [2024-12-16 12:26:52.946016] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a5) qid:0 cid:4 nsid:b2b2b2b2 cdw10:b2b2b2b2 cdw11:b2b2b2b2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.636 [2024-12-16 12:26:52.946044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.636 #14 NEW cov: 12340 ft: 13315 corp: 5/445b lim: 320 exec/s: 0 rss: 72Mb L: 107/115 MS: 1 CrossOver- 00:06:47.636 [2024-12-16 12:26:53.006170] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a5) qid:0 cid:4 nsid:b2b2b2b2 cdw10:b2b2b2b2 cdw11:b2b2b2b2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.636 [2024-12-16 12:26:53.006196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.636 #15 NEW cov: 12340 ft: 13440 corp: 6/552b lim: 320 exec/s: 0 rss: 72Mb L: 107/115 MS: 1 ChangeBit- 00:06:47.637 [2024-12-16 12:26:53.046264] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a5) qid:0 cid:4 nsid:b2b2b2b2 cdw10:b2b2b2b2 cdw11:b2b2b2b2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.637 [2024-12-16 12:26:53.046290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.637 #16 NEW cov: 12340 ft: 13502 corp: 7/660b lim: 320 exec/s: 0 rss: 72Mb L: 108/115 MS: 1 InsertByte- 00:06:47.637 [2024-12-16 12:26:53.106707] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a5) qid:0 cid:4 nsid:b2b2b2b2 cdw10:b2b2b2b2 cdw11:b2b2b2b2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.637 [2024-12-16 12:26:53.106733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.637 [2024-12-16 12:26:53.106789] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (b2) qid:0 cid:5 nsid:b2b2b2b2 cdw10:b2b2b2b2 cdw11:b2b2b2b2 SGL TRANSPORT DATA BLOCK TRANSPORT 0xb2b2b2b2b2b2b2b2 00:06:47.637 [2024-12-16 12:26:53.106803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.637 [2024-12-16 12:26:53.106859] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (b2) qid:0 cid:6 nsid:b2b2b2b2 cdw10:b2b2b2b2 cdw11:b2b2b2b2 SGL TRANSPORT DATA BLOCK TRANSPORT 0xb2b2b2b2b2b2b2b2 00:06:47.637 [2024-12-16 12:26:53.106873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.637 #17 NEW cov: 12341 ft: 13873 corp: 8/860b lim: 320 exec/s: 0 rss: 72Mb L: 200/200 MS: 1 CopyPart- 00:06:47.637 [2024-12-16 12:26:53.146517] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a5) qid:0 cid:4 nsid:b2b2b2b2 cdw10:b2b2b2b2 cdw11:b2b2b2b2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.637 [2024-12-16 12:26:53.146542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.637 #18 NEW cov: 12341 ft: 13906 corp: 9/975b lim: 320 exec/s: 0 rss: 72Mb L: 115/200 MS: 1 ChangeBinInt- 00:06:47.637 [2024-12-16 12:26:53.186627] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a5) qid:0 cid:4 nsid:b2b2b2b2 cdw10:b2b2b2b2 cdw11:b2b2b2b2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.637 [2024-12-16 12:26:53.186652] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 
m:0 dnr:0 00:06:47.896 #19 NEW cov: 12341 ft: 13919 corp: 10/1082b lim: 320 exec/s: 0 rss: 72Mb L: 107/200 MS: 1 ShuffleBytes- 00:06:47.896 [2024-12-16 12:26:53.246825] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a5) qid:0 cid:4 nsid:b2b2b2b2 cdw10:b2b2b2b2 cdw11:b2b2b2b2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.896 [2024-12-16 12:26:53.246851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.896 #20 NEW cov: 12341 ft: 13951 corp: 11/1190b lim: 320 exec/s: 0 rss: 73Mb L: 108/200 MS: 1 ChangeBit- 00:06:47.896 [2024-12-16 12:26:53.307006] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a5) qid:0 cid:4 nsid:b2b2b2b2 cdw10:b2b2b2b2 cdw11:b2b22bb2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.896 [2024-12-16 12:26:53.307032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.896 #21 NEW cov: 12341 ft: 13982 corp: 12/1298b lim: 320 exec/s: 0 rss: 73Mb L: 108/200 MS: 1 ChangeByte- 00:06:47.896 [2024-12-16 12:26:53.347072] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a5) qid:0 cid:4 nsid:b2b2b2b2 cdw10:b2b2b2b2 cdw11:b2b22bb2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.896 [2024-12-16 12:26:53.347097] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.896 NEW_FUNC[1/1]: 0x1c586b8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:652 00:06:47.896 #22 NEW cov: 12364 ft: 14003 corp: 13/1406b lim: 320 exec/s: 0 rss: 73Mb L: 108/200 MS: 1 ChangeBinInt- 00:06:47.896 [2024-12-16 12:26:53.407278] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a5) qid:0 cid:4 nsid:b2b2b2b2 cdw10:b2b2b2b2 cdw11:b2b2b2b2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.896 [2024-12-16 12:26:53.407303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.896 #23 NEW cov: 12364 ft: 14010 corp: 14/1513b lim: 320 exec/s: 0 rss: 73Mb L: 107/200 MS: 1 ChangeByte- 00:06:47.896 [2024-12-16 12:26:53.447379] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a5) qid:0 cid:4 nsid:b2b2b2b2 cdw10:b2b2b2b2 cdw11:b2b2b2b2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.896 [2024-12-16 12:26:53.447404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.155 #24 NEW cov: 12364 ft: 14044 corp: 15/1620b lim: 320 exec/s: 0 rss: 73Mb L: 107/200 MS: 1 ShuffleBytes- 00:06:48.155 [2024-12-16 12:26:53.487489] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a5) qid:0 cid:4 nsid:b2b2b2b2 cdw10:b2b2b2b2 cdw11:b2b2b2b2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.155 [2024-12-16 12:26:53.487514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.155 #25 NEW cov: 12364 ft: 14047 corp: 16/1727b lim: 320 exec/s: 25 rss: 73Mb L: 107/200 MS: 1 CrossOver- 00:06:48.155 [2024-12-16 12:26:53.547672] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a5) qid:0 cid:4 nsid:b2b2b2b2 cdw10:b2b2b2b2 cdw11:b2b2b2b2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.155 [2024-12-16 12:26:53.547697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) 
qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.155 #26 NEW cov: 12364 ft: 14051 corp: 17/1835b lim: 320 exec/s: 26 rss: 73Mb L: 108/200 MS: 1 CopyPart- 00:06:48.155 [2024-12-16 12:26:53.587763] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a5) qid:0 cid:4 nsid:b2b2b2b2 cdw10:b2b2b2b2 cdw11:b2b22bb2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.155 [2024-12-16 12:26:53.587787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.155 #27 NEW cov: 12364 ft: 14127 corp: 18/1943b lim: 320 exec/s: 27 rss: 73Mb L: 108/200 MS: 1 ChangeBit- 00:06:48.155 [2024-12-16 12:26:53.627917] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a5) qid:0 cid:4 nsid:b2b2b2b2 cdw10:b2b2b2b2 cdw11:b2b2b2b2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.155 [2024-12-16 12:26:53.627942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.155 #28 NEW cov: 12364 ft: 14134 corp: 19/2051b lim: 320 exec/s: 28 rss: 73Mb L: 108/200 MS: 1 CrossOver- 00:06:48.155 [2024-12-16 12:26:53.668027] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a5) qid:0 cid:4 nsid:b2b2b2b2 cdw10:b2b2b2b2 cdw11:b2b2b2b2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.155 [2024-12-16 12:26:53.668051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.155 #29 NEW cov: 12364 ft: 14153 corp: 20/2158b lim: 320 exec/s: 29 rss: 73Mb L: 107/200 MS: 1 ChangeBinInt- 00:06:48.414 [2024-12-16 12:26:53.728191] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a5) qid:0 cid:4 nsid:b2b2b2b2 cdw10:b2b2b2b2 cdw11:b2b2b2b2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.414 [2024-12-16 12:26:53.728215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.414 #30 NEW cov: 12364 ft: 14163 corp: 21/2265b lim: 320 exec/s: 30 rss: 73Mb L: 107/200 MS: 1 ChangeByte- 00:06:48.414 [2024-12-16 12:26:53.768289] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a5) qid:0 cid:4 nsid:b2b2b2b2 cdw10:b2b2b2b2 cdw11:b2b2b2b2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.414 [2024-12-16 12:26:53.768314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.414 #31 NEW cov: 12364 ft: 14173 corp: 22/2372b lim: 320 exec/s: 31 rss: 73Mb L: 107/200 MS: 1 CrossOver- 00:06:48.414 [2024-12-16 12:26:53.808381] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a5) qid:0 cid:4 nsid:b2b2b2b2 cdw10:b2b2b2b2 cdw11:b2b2b2b2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.414 [2024-12-16 12:26:53.808406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.414 #32 NEW cov: 12364 ft: 14197 corp: 23/2480b lim: 320 exec/s: 32 rss: 73Mb L: 108/200 MS: 1 InsertByte- 00:06:48.414 [2024-12-16 12:26:53.848519] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a5) qid:0 cid:4 nsid:b2b2b2b2 cdw10:b2b2b2b2 cdw11:b2b2b2b2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.414 [2024-12-16 12:26:53.848544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.414 #33 NEW cov: 12364 ft: 14246 corp: 24/2587b 
lim: 320 exec/s: 33 rss: 73Mb L: 107/200 MS: 1 ShuffleBytes- 00:06:48.414 [2024-12-16 12:26:53.888670] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a5) qid:0 cid:4 nsid:b2b2b2b2 cdw10:b2b2b2b2 cdw11:b2b2b2b2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.414 [2024-12-16 12:26:53.888694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.414 #34 NEW cov: 12364 ft: 14262 corp: 25/2678b lim: 320 exec/s: 34 rss: 73Mb L: 91/200 MS: 1 EraseBytes- 00:06:48.414 [2024-12-16 12:26:53.948838] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a5) qid:0 cid:4 nsid:b2b2b2b2 cdw10:b2b2b2b2 cdw11:b2b2b2b2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.414 [2024-12-16 12:26:53.948863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.414 #35 NEW cov: 12364 ft: 14296 corp: 26/2785b lim: 320 exec/s: 35 rss: 73Mb L: 107/200 MS: 1 ChangeByte- 00:06:48.673 [2024-12-16 12:26:53.988893] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:b2b2b2b2 SGL TRANSPORT DATA BLOCK TRANSPORT 0xb2b292b2b2b2b2b2 00:06:48.673 [2024-12-16 12:26:53.988918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.673 #38 NEW cov: 12381 ft: 14674 corp: 27/2879b lim: 320 exec/s: 38 rss: 73Mb L: 94/200 MS: 3 InsertByte-EraseBytes-CrossOver- 00:06:48.673 [2024-12-16 12:26:54.029304] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a5) qid:0 cid:4 nsid:b2b2b2b2 cdw10:b2b2b2b2 cdw11:b2b2b2b2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.673 [2024-12-16 12:26:54.029329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.673 [2024-12-16 12:26:54.029387] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (b2) qid:0 cid:5 nsid:b2b2b2b2 cdw10:b2b2b2b2 cdw11:b2b2b2b2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x4d4d4eb2b2b2b2b2 00:06:48.673 [2024-12-16 12:26:54.029401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.673 [2024-12-16 12:26:54.029461] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (b2) qid:0 cid:6 nsid:b2b2b2b2 cdw10:b2b2b2b2 cdw11:b2b2b2b2 SGL TRANSPORT DATA BLOCK TRANSPORT 0xb22bb2b2b2b2b2b2 00:06:48.673 [2024-12-16 12:26:54.029475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:48.673 #39 NEW cov: 12381 ft: 14699 corp: 28/3085b lim: 320 exec/s: 39 rss: 73Mb L: 206/206 MS: 1 CrossOver- 00:06:48.673 [2024-12-16 12:26:54.069283] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a5) qid:0 cid:4 nsid:b2b2b2b2 cdw10:b2b2b2b2 cdw11:b2b2b2b2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.673 [2024-12-16 12:26:54.069309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.673 [2024-12-16 12:26:54.069367] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (b2) qid:0 cid:5 nsid:b2b2b2b2 cdw10:b2b2b2b2 cdw11:b2b2b2b2 SGL TRANSPORT DATA BLOCK TRANSPORT 0xb2b2b2b2b2b2b2b2 00:06:48.673 [2024-12-16 12:26:54.069381] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE 
(00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.673 #40 NEW cov: 12381 ft: 14864 corp: 29/3272b lim: 320 exec/s: 40 rss: 73Mb L: 187/206 MS: 1 CrossOver- 00:06:48.673 [2024-12-16 12:26:54.109266] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a5) qid:0 cid:4 nsid:b2b2b2b2 cdw10:b2b2b2b2 cdw11:b2b2b2b2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.673 [2024-12-16 12:26:54.109291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.673 #41 NEW cov: 12381 ft: 14888 corp: 30/3379b lim: 320 exec/s: 41 rss: 73Mb L: 107/206 MS: 1 ChangeBit- 00:06:48.673 [2024-12-16 12:26:54.169449] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a5) qid:0 cid:4 nsid:b2b2b2b2 cdw10:b2b2b2b2 cdw11:b2b2b2b2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.673 [2024-12-16 12:26:54.169475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.673 #42 NEW cov: 12381 ft: 14893 corp: 31/3469b lim: 320 exec/s: 42 rss: 73Mb L: 90/206 MS: 1 EraseBytes- 00:06:48.673 [2024-12-16 12:26:54.229871] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a5) qid:0 cid:4 nsid:b2b2b2b2 cdw10:b2b2b2b2 cdw11:b2b2b2b2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.673 [2024-12-16 12:26:54.229896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.673 [2024-12-16 12:26:54.229955] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (b2) qid:0 cid:5 nsid:b2b2b2b2 cdw10:b2b2b2b2 cdw11:b2b2b2b2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x4d4d4eb2b2b2b2b2 00:06:48.673 [2024-12-16 12:26:54.229969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.673 [2024-12-16 12:26:54.230024] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (b2) qid:0 cid:6 nsid:b2b2b2b2 cdw10:b2b2b2b2 cdw11:b2b2b2b2 SGL TRANSPORT DATA BLOCK TRANSPORT 0xb22bb2b2b2b2b2b2 00:06:48.673 [2024-12-16 12:26:54.230038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:48.932 #43 NEW cov: 12381 ft: 14899 corp: 32/3676b lim: 320 exec/s: 43 rss: 74Mb L: 207/207 MS: 1 InsertByte- 00:06:48.932 [2024-12-16 12:26:54.289836] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a5) qid:0 cid:4 nsid:b2b2b2b2 cdw10:b2b2b2b2 cdw11:b2b2b2b2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.932 [2024-12-16 12:26:54.289862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.932 #44 NEW cov: 12381 ft: 14944 corp: 33/3783b lim: 320 exec/s: 44 rss: 74Mb L: 107/207 MS: 1 CopyPart- 00:06:48.932 [2024-12-16 12:26:54.350225] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a5) qid:0 cid:4 nsid:b2b2b2b2 cdw10:b2b2b2b2 cdw11:b2b2b2b2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.932 [2024-12-16 12:26:54.350253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.932 [2024-12-16 12:26:54.350313] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (b2) qid:0 cid:5 nsid:b2b2b2b2 cdw10:b2b2b2b2 cdw11:b2b2b2b2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x4d4d4eb2b2b2b2b2 00:06:48.933 
[2024-12-16 12:26:54.350326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.933 [2024-12-16 12:26:54.350383] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (b2) qid:0 cid:6 nsid:deeaba9c cdw10:b2b2b2b2 cdw11:3fb2b2b2 SGL TRANSPORT DATA BLOCK TRANSPORT 0xb2b2b2b2b2b2b2b2 00:06:48.933 [2024-12-16 12:26:54.350396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:48.933 #45 NEW cov: 12381 ft: 14948 corp: 34/3998b lim: 320 exec/s: 45 rss: 74Mb L: 215/215 MS: 1 CMP- DE: "\001\005_\234\272\352\3362"- 00:06:48.933 [2024-12-16 12:26:54.410130] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a5) qid:0 cid:4 nsid:b2b2b2b2 cdw10:b2b2b2b2 cdw11:b2b2b2b2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.933 [2024-12-16 12:26:54.410155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.933 #46 NEW cov: 12381 ft: 14960 corp: 35/4105b lim: 320 exec/s: 46 rss: 74Mb L: 107/215 MS: 1 ShuffleBytes- 00:06:48.933 [2024-12-16 12:26:54.450246] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a5) qid:0 cid:4 nsid:b2b2b2b2 cdw10:b2b2b2b2 cdw11:b2b2b2b2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.933 [2024-12-16 12:26:54.450271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.933 #47 NEW cov: 12381 ft: 14970 corp: 36/4220b lim: 320 exec/s: 47 rss: 74Mb L: 115/215 MS: 1 CrossOver- 00:06:48.933 [2024-12-16 12:26:54.490485] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a5) qid:0 cid:4 nsid:b2b2b2b2 cdw10:b2b2b2b2 cdw11:b2b2b2b2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.933 [2024-12-16 12:26:54.490511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.933 [2024-12-16 12:26:54.490569] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (b2) qid:0 cid:5 nsid:b2b2b2b2 cdw10:b2b2b2b2 cdw11:b2b2b2b2 SGL TRANSPORT DATA BLOCK TRANSPORT 0xb2b2b2b2b2b2b2b2 00:06:48.933 [2024-12-16 12:26:54.490582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.192 #48 NEW cov: 12381 ft: 14982 corp: 37/4408b lim: 320 exec/s: 24 rss: 74Mb L: 188/215 MS: 1 CopyPart- 00:06:49.192 #48 DONE cov: 12381 ft: 14982 corp: 37/4408b lim: 320 exec/s: 24 rss: 74Mb 00:06:49.192 ###### Recommended dictionary. ###### 00:06:49.192 "\001\005_\234\272\352\3362" # Uses: 0 00:06:49.192 ###### End of recommended dictionary. 
###### 00:06:49.192 Done 48 runs in 2 second(s) 00:06:49.192 12:26:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_0.conf /var/tmp/suppress_nvmf_fuzz 00:06:49.192 12:26:54 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:49.192 12:26:54 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:49.192 12:26:54 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:06:49.192 12:26:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=1 00:06:49.192 12:26:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:49.192 12:26:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:49.192 12:26:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:06:49.192 12:26:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_1.conf 00:06:49.192 12:26:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:49.192 12:26:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:49.192 12:26:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 1 00:06:49.192 12:26:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4401 00:06:49.192 12:26:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:06:49.192 12:26:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' 00:06:49.192 12:26:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4401"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:49.192 12:26:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:49.192 12:26:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:49.192 12:26:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' -c /tmp/fuzz_json_1.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 -Z 1 00:06:49.192 [2024-12-16 12:26:54.681951] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:06:49.192 [2024-12-16 12:26:54.682019] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid983802 ] 00:06:49.452 [2024-12-16 12:26:54.864297] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.452 [2024-12-16 12:26:54.897269] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.452 [2024-12-16 12:26:54.956096] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:49.452 [2024-12-16 12:26:54.972400] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4401 *** 00:06:49.452 INFO: Running with entropic power schedule (0xFF, 100). 
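The per-fuzzer setup visible for runs 0 and 1 repeats for each index: the TCP service id is derived from the index (4400, 4401, ...), a per-run JSON config is produced by sed-rewriting the trsvcid in the template, and the matching transport ID string is passed to llvm_nvme_fuzz via -F. A condensed bash sketch of that pattern, with the relative paths and the run_one_fuzzer name as illustrative placeholders (the real nvmf/run.sh also handles the corpus directory and LSAN suppressions shown in the trace):

  # Sketch only: launch one fuzzer instance on its own TCP port.
  run_one_fuzzer() {
      local idx=$1 time_s=$2 core=$3
      local port cfg trid
      port="44$(printf '%02d' "$idx")"        # 4400, 4401, ...
      cfg="/tmp/fuzz_json_${idx}.conf"
      sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"${port}\"/" fuzz_json.conf > "$cfg"
      trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:${port}"
      ./llvm_nvme_fuzz -m "$core" -s 512 -F "$trid" -c "$cfg" -t "$time_s" -Z "$idx"
  }
  # Example: run_one_fuzzer 1 1 0x1    # index 1, 1 second, core mask 0x1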
00:06:49.452 INFO: Seed: 3357298739 00:06:49.452 INFO: Loaded 1 modules (390948 inline 8-bit counters): 390948 [0x2c8afcc, 0x2cea6f0), 00:06:49.452 INFO: Loaded 1 PC tables (390948 PCs): 390948 [0x2cea6f0,0x32e1930), 00:06:49.452 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:06:49.452 INFO: A corpus is not provided, starting from an empty corpus 00:06:49.452 #2 INITED exec/s: 0 rss: 65Mb 00:06:49.452 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:49.452 This may also happen if the target rejected all inputs we tried so far 00:06:49.713 [2024-12-16 12:26:55.048711] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:49.713 [2024-12-16 12:26:55.048880] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:49.713 [2024-12-16 12:26:55.049044] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:49.713 [2024-12-16 12:26:55.049204] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:49.713 [2024-12-16 12:26:55.049566] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.713 [2024-12-16 12:26:55.049602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.713 [2024-12-16 12:26:55.049746] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.713 [2024-12-16 12:26:55.049764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.713 [2024-12-16 12:26:55.049893] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.713 [2024-12-16 12:26:55.049912] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:49.713 [2024-12-16 12:26:55.050046] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.713 [2024-12-16 12:26:55.050063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:49.972 NEW_FUNC[1/712]: 0x43c4e8 in fuzz_admin_get_log_page_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:67 00:06:49.972 NEW_FUNC[2/712]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:49.972 #3 NEW cov: 12154 ft: 12151 corp: 2/29b lim: 30 exec/s: 0 rss: 71Mb L: 28/28 MS: 1 InsertRepeatedBytes- 00:06:49.972 [2024-12-16 12:26:55.389374] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:49.972 [2024-12-16 12:26:55.389544] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:49.972 [2024-12-16 12:26:55.389671] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:49.972 [2024-12-16 12:26:55.389816] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 
0x30000ffff 00:06:49.972 [2024-12-16 12:26:55.390148] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.972 [2024-12-16 12:26:55.390184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.972 [2024-12-16 12:26:55.390315] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.972 [2024-12-16 12:26:55.390339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.972 [2024-12-16 12:26:55.390457] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:a7ff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.972 [2024-12-16 12:26:55.390474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:49.972 [2024-12-16 12:26:55.390599] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.972 [2024-12-16 12:26:55.390623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:49.972 #4 NEW cov: 12285 ft: 12835 corp: 3/57b lim: 30 exec/s: 0 rss: 72Mb L: 28/28 MS: 1 ChangeByte- 00:06:49.972 [2024-12-16 12:26:55.459374] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:49.972 [2024-12-16 12:26:55.459523] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:49.972 [2024-12-16 12:26:55.459669] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:49.972 [2024-12-16 12:26:55.459982] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.972 [2024-12-16 12:26:55.460013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.972 [2024-12-16 12:26:55.460132] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.972 [2024-12-16 12:26:55.460151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.972 [2024-12-16 12:26:55.460273] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.972 [2024-12-16 12:26:55.460294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:49.972 NEW_FUNC[1/5]: 0x1961c88 in nvme_complete_register_operations /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_qpair.c:725 00:06:49.972 NEW_FUNC[2/5]: 0x1974f28 in nvme_ctrlr_lock /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/./nvme_internal.h:1220 00:06:49.972 #5 NEW cov: 12320 ft: 13565 corp: 4/77b lim: 30 exec/s: 0 rss: 72Mb L: 20/28 MS: 1 EraseBytes- 00:06:49.972 [2024-12-16 12:26:55.509497] 
ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:49.972 [2024-12-16 12:26:55.509648] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:49.972 [2024-12-16 12:26:55.509789] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:49.972 [2024-12-16 12:26:55.510136] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.972 [2024-12-16 12:26:55.510167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.972 [2024-12-16 12:26:55.510299] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.972 [2024-12-16 12:26:55.510317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.972 [2024-12-16 12:26:55.510434] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.972 [2024-12-16 12:26:55.510451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.232 #6 NEW cov: 12405 ft: 13873 corp: 5/100b lim: 30 exec/s: 0 rss: 72Mb L: 23/28 MS: 1 CrossOver- 00:06:50.232 [2024-12-16 12:26:55.579732] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:50.232 [2024-12-16 12:26:55.579880] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:50.232 [2024-12-16 12:26:55.580028] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:50.232 [2024-12-16 12:26:55.580340] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.232 [2024-12-16 12:26:55.580369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.232 [2024-12-16 12:26:55.580488] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.232 [2024-12-16 12:26:55.580507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.232 [2024-12-16 12:26:55.580630] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.232 [2024-12-16 12:26:55.580666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.232 #7 NEW cov: 12405 ft: 13949 corp: 6/123b lim: 30 exec/s: 0 rss: 72Mb L: 23/28 MS: 1 CopyPart- 00:06:50.232 [2024-12-16 12:26:55.649913] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:50.232 [2024-12-16 12:26:55.650075] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:50.232 [2024-12-16 12:26:55.650212] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:50.232 [2024-12-16 
12:26:55.650555] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.232 [2024-12-16 12:26:55.650584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.232 [2024-12-16 12:26:55.650703] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.232 [2024-12-16 12:26:55.650724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.232 [2024-12-16 12:26:55.650846] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.232 [2024-12-16 12:26:55.650865] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.232 #8 NEW cov: 12405 ft: 14035 corp: 7/146b lim: 30 exec/s: 0 rss: 72Mb L: 23/28 MS: 1 CrossOver- 00:06:50.232 [2024-12-16 12:26:55.700075] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:50.232 [2024-12-16 12:26:55.700230] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:50.232 [2024-12-16 12:26:55.700369] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:50.232 [2024-12-16 12:26:55.700718] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.232 [2024-12-16 12:26:55.700746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.232 [2024-12-16 12:26:55.700868] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.232 [2024-12-16 12:26:55.700885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.232 [2024-12-16 12:26:55.701007] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.232 [2024-12-16 12:26:55.701024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.232 #9 NEW cov: 12405 ft: 14089 corp: 8/169b lim: 30 exec/s: 0 rss: 72Mb L: 23/28 MS: 1 ShuffleBytes- 00:06:50.232 [2024-12-16 12:26:55.770366] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:50.232 [2024-12-16 12:26:55.770515] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:50.232 [2024-12-16 12:26:55.770672] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:50.232 [2024-12-16 12:26:55.770819] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:50.232 [2024-12-16 12:26:55.771143] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.232 [2024-12-16 12:26:55.771173] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.232 [2024-12-16 12:26:55.771295] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.232 [2024-12-16 12:26:55.771316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.232 [2024-12-16 12:26:55.771441] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.232 [2024-12-16 12:26:55.771461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.232 [2024-12-16 12:26:55.771573] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:0aff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.232 [2024-12-16 12:26:55.771590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:50.492 #10 NEW cov: 12405 ft: 14119 corp: 9/197b lim: 30 exec/s: 0 rss: 72Mb L: 28/28 MS: 1 CopyPart- 00:06:50.492 [2024-12-16 12:26:55.840727] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:50.492 [2024-12-16 12:26:55.840899] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:50.492 [2024-12-16 12:26:55.841048] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:50.492 [2024-12-16 12:26:55.841187] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:50.492 [2024-12-16 12:26:55.841340] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ff0a 00:06:50.492 [2024-12-16 12:26:55.841677] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.492 [2024-12-16 12:26:55.841707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.492 [2024-12-16 12:26:55.841845] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.492 [2024-12-16 12:26:55.841865] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.492 [2024-12-16 12:26:55.841990] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.492 [2024-12-16 12:26:55.842009] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.492 [2024-12-16 12:26:55.842138] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.492 [2024-12-16 12:26:55.842156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:50.492 [2024-12-16 12:26:55.842272] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.492 [2024-12-16 12:26:55.842290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:50.492 #11 NEW cov: 12405 ft: 14181 corp: 10/227b lim: 30 exec/s: 0 rss: 72Mb L: 30/30 MS: 1 InsertRepeatedBytes- 00:06:50.492 [2024-12-16 12:26:55.910834] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:50.492 [2024-12-16 12:26:55.910985] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:50.492 [2024-12-16 12:26:55.911145] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:50.492 [2024-12-16 12:26:55.911289] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x3000058ff 00:06:50.492 [2024-12-16 12:26:55.911624] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.492 [2024-12-16 12:26:55.911652] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.492 [2024-12-16 12:26:55.911770] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.492 [2024-12-16 12:26:55.911788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.492 [2024-12-16 12:26:55.911919] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.492 [2024-12-16 12:26:55.911939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.492 [2024-12-16 12:26:55.912054] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.492 [2024-12-16 12:26:55.912073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:50.492 NEW_FUNC[1/1]: 0x1c586b8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:652 00:06:50.492 #12 NEW cov: 12428 ft: 14276 corp: 11/255b lim: 30 exec/s: 0 rss: 72Mb L: 28/30 MS: 1 ChangeByte- 00:06:50.492 [2024-12-16 12:26:55.960973] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:50.492 [2024-12-16 12:26:55.961123] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:50.492 [2024-12-16 12:26:55.961273] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:50.492 [2024-12-16 12:26:55.961415] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ff0a 00:06:50.492 [2024-12-16 12:26:55.961779] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.492 [2024-12-16 12:26:55.961808] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.492 [2024-12-16 12:26:55.961924] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.492 [2024-12-16 12:26:55.961943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.492 [2024-12-16 12:26:55.962064] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.492 [2024-12-16 12:26:55.962083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.492 [2024-12-16 12:26:55.962201] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.492 [2024-12-16 12:26:55.962219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:50.492 #13 NEW cov: 12428 ft: 14343 corp: 12/282b lim: 30 exec/s: 0 rss: 73Mb L: 27/30 MS: 1 CrossOver- 00:06:50.492 [2024-12-16 12:26:56.031110] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:50.492 [2024-12-16 12:26:56.031271] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:50.492 [2024-12-16 12:26:56.031414] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x3000058ff 00:06:50.492 [2024-12-16 12:26:56.031754] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.492 [2024-12-16 12:26:56.031783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.492 [2024-12-16 12:26:56.031907] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.492 [2024-12-16 12:26:56.031928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.492 [2024-12-16 12:26:56.032052] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.492 [2024-12-16 12:26:56.032071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.751 #14 NEW cov: 12428 ft: 14377 corp: 13/304b lim: 30 exec/s: 14 rss: 73Mb L: 22/30 MS: 1 EraseBytes- 00:06:50.751 [2024-12-16 12:26:56.101090] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:50.751 [2024-12-16 12:26:56.101418] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.751 [2024-12-16 12:26:56.101447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.751 #15 NEW cov: 12428 ft: 14759 corp: 14/311b lim: 30 exec/s: 15 rss: 73Mb L: 7/30 MS: 1 CrossOver- 00:06:50.751 [2024-12-16 12:26:56.141302] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:50.751 [2024-12-16 12:26:56.141464] 
ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:50.751 [2024-12-16 12:26:56.141608] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:50.751 [2024-12-16 12:26:56.141951] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.752 [2024-12-16 12:26:56.141980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.752 [2024-12-16 12:26:56.142106] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.752 [2024-12-16 12:26:56.142125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.752 [2024-12-16 12:26:56.142252] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.752 [2024-12-16 12:26:56.142269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.752 #16 NEW cov: 12428 ft: 14847 corp: 15/331b lim: 30 exec/s: 16 rss: 73Mb L: 20/30 MS: 1 ChangeBit- 00:06:50.752 [2024-12-16 12:26:56.181493] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:50.752 [2024-12-16 12:26:56.181653] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x3000030ff 00:06:50.752 [2024-12-16 12:26:56.181823] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:50.752 [2024-12-16 12:26:56.181970] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ff0a 00:06:50.752 [2024-12-16 12:26:56.182302] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.752 [2024-12-16 12:26:56.182331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.752 [2024-12-16 12:26:56.182457] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.752 [2024-12-16 12:26:56.182476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.752 [2024-12-16 12:26:56.182601] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.752 [2024-12-16 12:26:56.182623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.752 [2024-12-16 12:26:56.182749] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.752 [2024-12-16 12:26:56.182767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:50.752 #17 NEW cov: 12428 ft: 14931 corp: 16/355b lim: 30 exec/s: 17 rss: 73Mb L: 24/30 MS: 1 InsertByte- 00:06:50.752 [2024-12-16 
12:26:56.231698] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:50.752 [2024-12-16 12:26:56.231851] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:50.752 [2024-12-16 12:26:56.231999] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:50.752 [2024-12-16 12:26:56.232145] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x3000058ff 00:06:50.752 [2024-12-16 12:26:56.232495] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.752 [2024-12-16 12:26:56.232527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.752 [2024-12-16 12:26:56.232641] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.752 [2024-12-16 12:26:56.232660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.752 [2024-12-16 12:26:56.232777] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.752 [2024-12-16 12:26:56.232794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.752 [2024-12-16 12:26:56.232911] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.752 [2024-12-16 12:26:56.232928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:50.752 #18 NEW cov: 12428 ft: 14957 corp: 17/383b lim: 30 exec/s: 18 rss: 73Mb L: 28/30 MS: 1 ChangeByte- 00:06:50.752 [2024-12-16 12:26:56.281875] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:50.752 [2024-12-16 12:26:56.282029] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:50.752 [2024-12-16 12:26:56.282171] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:50.752 [2024-12-16 12:26:56.282313] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:50.752 [2024-12-16 12:26:56.282670] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.752 [2024-12-16 12:26:56.282699] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.752 [2024-12-16 12:26:56.282827] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.752 [2024-12-16 12:26:56.282846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.752 [2024-12-16 12:26:56.282964] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.752 [2024-12-16 
12:26:56.282980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.752 [2024-12-16 12:26:56.283097] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.752 [2024-12-16 12:26:56.283115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:50.752 #19 NEW cov: 12428 ft: 14967 corp: 18/408b lim: 30 exec/s: 19 rss: 73Mb L: 25/30 MS: 1 CopyPart- 00:06:51.011 [2024-12-16 12:26:56.331762] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:51.011 [2024-12-16 12:26:56.332095] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.011 [2024-12-16 12:26:56.332124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.011 #20 NEW cov: 12428 ft: 14980 corp: 19/415b lim: 30 exec/s: 20 rss: 73Mb L: 7/30 MS: 1 ChangeByte- 00:06:51.011 [2024-12-16 12:26:56.402256] ctrlr.c:2670:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (262144) > buf size (4096) 00:06:51.011 [2024-12-16 12:26:56.402402] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:51.011 [2024-12-16 12:26:56.402558] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:51.011 [2024-12-16 12:26:56.402707] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:51.011 [2024-12-16 12:26:56.403030] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.011 [2024-12-16 12:26:56.403058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.011 [2024-12-16 12:26:56.403186] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.011 [2024-12-16 12:26:56.403205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.011 [2024-12-16 12:26:56.403329] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.011 [2024-12-16 12:26:56.403346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.011 [2024-12-16 12:26:56.403476] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:0aff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.011 [2024-12-16 12:26:56.403493] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:51.011 #21 NEW cov: 12451 ft: 15009 corp: 20/443b lim: 30 exec/s: 21 rss: 73Mb L: 28/30 MS: 1 ChangeBinInt- 00:06:51.011 [2024-12-16 12:26:56.472487] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:51.011 [2024-12-16 12:26:56.472644] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid 
log page offset 0xffff 00:06:51.011 [2024-12-16 12:26:56.472795] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:51.011 [2024-12-16 12:26:56.472942] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:51.011 [2024-12-16 12:26:56.473268] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.011 [2024-12-16 12:26:56.473296] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.011 [2024-12-16 12:26:56.473410] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.011 [2024-12-16 12:26:56.473428] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.011 [2024-12-16 12:26:56.473562] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:a7ff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.011 [2024-12-16 12:26:56.473579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.011 [2024-12-16 12:26:56.473701] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.011 [2024-12-16 12:26:56.473719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:51.011 #22 NEW cov: 12451 ft: 15053 corp: 21/471b lim: 30 exec/s: 22 rss: 73Mb L: 28/30 MS: 1 ChangeByte- 00:06:51.011 [2024-12-16 12:26:56.522529] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:51.011 [2024-12-16 12:26:56.522692] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:51.011 [2024-12-16 12:26:56.522829] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:51.011 [2024-12-16 12:26:56.523149] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.011 [2024-12-16 12:26:56.523181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.011 [2024-12-16 12:26:56.523305] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.011 [2024-12-16 12:26:56.523324] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.011 [2024-12-16 12:26:56.523445] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.011 [2024-12-16 12:26:56.523464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.011 #23 NEW cov: 12451 ft: 15062 corp: 22/494b lim: 30 exec/s: 23 rss: 73Mb L: 23/30 MS: 1 ShuffleBytes- 00:06:51.011 [2024-12-16 12:26:56.572456] ctrlr.c:2658:nvmf_ctrlr_get_log_page: 
*ERROR*: Invalid log page offset 0x30000ffff 00:06:51.011 [2024-12-16 12:26:56.572788] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.011 [2024-12-16 12:26:56.572831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.271 #24 NEW cov: 12451 ft: 15089 corp: 23/503b lim: 30 exec/s: 24 rss: 73Mb L: 9/30 MS: 1 CrossOver- 00:06:51.271 [2024-12-16 12:26:56.642940] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:51.271 [2024-12-16 12:26:56.643101] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:51.271 [2024-12-16 12:26:56.643244] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:51.271 [2024-12-16 12:26:56.643379] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:51.271 [2024-12-16 12:26:56.643697] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.271 [2024-12-16 12:26:56.643727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.271 [2024-12-16 12:26:56.643854] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.271 [2024-12-16 12:26:56.643875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.271 [2024-12-16 12:26:56.643994] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:a7ff8328 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.271 [2024-12-16 12:26:56.644013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.271 [2024-12-16 12:26:56.644138] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.271 [2024-12-16 12:26:56.644157] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:51.271 #25 NEW cov: 12451 ft: 15096 corp: 24/531b lim: 30 exec/s: 25 rss: 73Mb L: 28/30 MS: 1 ChangeByte- 00:06:51.271 [2024-12-16 12:26:56.682979] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000e8ff 00:06:51.271 [2024-12-16 12:26:56.683136] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:51.271 [2024-12-16 12:26:56.683276] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:51.271 [2024-12-16 12:26:56.683423] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:51.271 [2024-12-16 12:26:56.683759] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.271 [2024-12-16 12:26:56.683790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.271 [2024-12-16 12:26:56.683913] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.271 [2024-12-16 12:26:56.683929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.271 [2024-12-16 12:26:56.684062] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:a7ff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.271 [2024-12-16 12:26:56.684080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.271 [2024-12-16 12:26:56.684207] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.271 [2024-12-16 12:26:56.684225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:51.271 #26 NEW cov: 12451 ft: 15109 corp: 25/559b lim: 30 exec/s: 26 rss: 73Mb L: 28/30 MS: 1 ShuffleBytes- 00:06:51.271 [2024-12-16 12:26:56.753165] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:51.271 [2024-12-16 12:26:56.753314] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xffff 00:06:51.271 [2024-12-16 12:26:56.753455] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:51.271 [2024-12-16 12:26:56.753592] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:51.271 [2024-12-16 12:26:56.753938] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.271 [2024-12-16 12:26:56.753967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.271 [2024-12-16 12:26:56.754091] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.271 [2024-12-16 12:26:56.754108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.271 [2024-12-16 12:26:56.754230] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:a7ff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.271 [2024-12-16 12:26:56.754248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.271 [2024-12-16 12:26:56.754372] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.271 [2024-12-16 12:26:56.754390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:51.271 #27 NEW cov: 12451 ft: 15141 corp: 26/587b lim: 30 exec/s: 27 rss: 73Mb L: 28/30 MS: 1 ShuffleBytes- 00:06:51.271 [2024-12-16 12:26:56.793282] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:51.271 [2024-12-16 12:26:56.793427] ctrlr.c:2670:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (1048576) > buf size (4096) 00:06:51.271 [2024-12-16 
12:26:56.793563] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:51.271 [2024-12-16 12:26:56.793742] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:51.271 [2024-12-16 12:26:56.794048] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.271 [2024-12-16 12:26:56.794075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.271 [2024-12-16 12:26:56.794196] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.271 [2024-12-16 12:26:56.794215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.271 [2024-12-16 12:26:56.794333] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.271 [2024-12-16 12:26:56.794351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.271 [2024-12-16 12:26:56.794469] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.271 [2024-12-16 12:26:56.794487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:51.271 #28 NEW cov: 12451 ft: 15152 corp: 27/612b lim: 30 exec/s: 28 rss: 73Mb L: 25/30 MS: 1 EraseBytes- 00:06:51.531 [2024-12-16 12:26:56.853381] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:51.531 [2024-12-16 12:26:56.853544] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:51.531 [2024-12-16 12:26:56.853691] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:51.531 [2024-12-16 12:26:56.853997] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.531 [2024-12-16 12:26:56.854026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.531 [2024-12-16 12:26:56.854155] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.531 [2024-12-16 12:26:56.854175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.531 [2024-12-16 12:26:56.854297] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.531 [2024-12-16 12:26:56.854313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.531 #29 NEW cov: 12451 ft: 15228 corp: 28/635b lim: 30 exec/s: 29 rss: 73Mb L: 23/30 MS: 1 ChangeBit- 00:06:51.531 [2024-12-16 12:26:56.903504] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x3000004ff 
00:06:51.531 [2024-12-16 12:26:56.903672] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:51.531 [2024-12-16 12:26:56.903818] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:51.531 [2024-12-16 12:26:56.904150] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.531 [2024-12-16 12:26:56.904180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.532 [2024-12-16 12:26:56.904300] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.532 [2024-12-16 12:26:56.904317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.532 [2024-12-16 12:26:56.904436] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.532 [2024-12-16 12:26:56.904454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.532 #30 NEW cov: 12451 ft: 15256 corp: 29/658b lim: 30 exec/s: 30 rss: 73Mb L: 23/30 MS: 1 ChangeBinInt- 00:06:51.532 [2024-12-16 12:26:56.943635] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x3000004ff 00:06:51.532 [2024-12-16 12:26:56.943793] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:51.532 [2024-12-16 12:26:56.943935] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:51.532 [2024-12-16 12:26:56.944276] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.532 [2024-12-16 12:26:56.944306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.532 [2024-12-16 12:26:56.944438] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff836c cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.532 [2024-12-16 12:26:56.944456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.532 [2024-12-16 12:26:56.944575] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.532 [2024-12-16 12:26:56.944596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.532 #31 NEW cov: 12451 ft: 15270 corp: 30/681b lim: 30 exec/s: 31 rss: 73Mb L: 23/30 MS: 1 ChangeByte- 00:06:51.532 [2024-12-16 12:26:57.003951] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:51.532 [2024-12-16 12:26:57.004094] ctrlr.c:2670:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (1048576) > buf size (4096) 00:06:51.532 [2024-12-16 12:26:57.004254] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:51.532 [2024-12-16 12:26:57.004402] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: 
Invalid log page offset 0x30000ffff 00:06:51.532 [2024-12-16 12:26:57.004727] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.532 [2024-12-16 12:26:57.004755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.532 [2024-12-16 12:26:57.004880] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.532 [2024-12-16 12:26:57.004897] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.532 [2024-12-16 12:26:57.005022] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.532 [2024-12-16 12:26:57.005040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.532 [2024-12-16 12:26:57.005173] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83f8 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.532 [2024-12-16 12:26:57.005191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:51.532 #32 pulse cov: 12451 ft: 15283 corp: 30/681b lim: 30 exec/s: 16 rss: 73Mb 00:06:51.532 #32 NEW cov: 12451 ft: 15283 corp: 31/706b lim: 30 exec/s: 16 rss: 73Mb L: 25/30 MS: 1 ChangeBinInt- 00:06:51.532 #32 DONE cov: 12451 ft: 15283 corp: 31/706b lim: 30 exec/s: 16 rss: 73Mb 00:06:51.532 Done 32 runs in 2 second(s) 00:06:51.791 12:26:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_1.conf /var/tmp/suppress_nvmf_fuzz 00:06:51.791 12:26:57 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:51.791 12:26:57 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:51.791 12:26:57 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:06:51.791 12:26:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=2 00:06:51.791 12:26:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:51.791 12:26:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:51.791 12:26:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:06:51.791 12:26:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_2.conf 00:06:51.791 12:26:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:51.792 12:26:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:51.792 12:26:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 2 00:06:51.792 12:26:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4402 00:06:51.792 12:26:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:06:51.792 12:26:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' 00:06:51.792 
12:26:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4402"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:51.792 12:26:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:51.792 12:26:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:51.792 12:26:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' -c /tmp/fuzz_json_2.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 -Z 2 00:06:51.792 [2024-12-16 12:26:57.199066] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:06:51.792 [2024-12-16 12:26:57.199132] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid984136 ] 00:06:52.051 [2024-12-16 12:26:57.382795] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.051 [2024-12-16 12:26:57.416484] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.051 [2024-12-16 12:26:57.475336] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:52.051 [2024-12-16 12:26:57.491657] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4402 *** 00:06:52.051 INFO: Running with entropic power schedule (0xFF, 100). 00:06:52.051 INFO: Seed: 1582351688 00:06:52.051 INFO: Loaded 1 modules (390948 inline 8-bit counters): 390948 [0x2c8afcc, 0x2cea6f0), 00:06:52.051 INFO: Loaded 1 PC tables (390948 PCs): 390948 [0x2cea6f0,0x32e1930), 00:06:52.051 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:06:52.051 INFO: A corpus is not provided, starting from an empty corpus 00:06:52.051 #2 INITED exec/s: 0 rss: 65Mb 00:06:52.051 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:06:52.051 This may also happen if the target rejected all inputs we tried so far 00:06:52.051 [2024-12-16 12:26:57.567817] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:52.051 [2024-12-16 12:26:57.567977] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:52.051 [2024-12-16 12:26:57.568347] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.051 [2024-12-16 12:26:57.568387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.051 [2024-12-16 12:26:57.568508] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.051 [2024-12-16 12:26:57.568532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.051 [2024-12-16 12:26:57.568668] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.051 [2024-12-16 12:26:57.568691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:52.620 NEW_FUNC[1/716]: 0x43ef98 in fuzz_admin_identify_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:95 00:06:52.620 NEW_FUNC[2/716]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:52.620 #9 NEW cov: 12149 ft: 12152 corp: 2/22b lim: 35 exec/s: 0 rss: 72Mb L: 21/21 MS: 2 InsertByte-InsertRepeatedBytes- 00:06:52.620 [2024-12-16 12:26:57.908697] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:52.620 [2024-12-16 12:26:57.908858] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:52.620 [2024-12-16 12:26:57.909016] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:52.620 [2024-12-16 12:26:57.909367] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.620 [2024-12-16 12:26:57.909407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.620 [2024-12-16 12:26:57.909532] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.620 [2024-12-16 12:26:57.909559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.620 [2024-12-16 12:26:57.909691] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.620 [2024-12-16 12:26:57.909715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:52.620 [2024-12-16 12:26:57.909833] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 
cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.620 [2024-12-16 12:26:57.909856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:52.620 #10 NEW cov: 12281 ft: 13372 corp: 3/55b lim: 35 exec/s: 0 rss: 72Mb L: 33/33 MS: 1 CopyPart- 00:06:52.620 [2024-12-16 12:26:57.968680] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:52.620 [2024-12-16 12:26:57.969152] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.620 [2024-12-16 12:26:57.969180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.620 [2024-12-16 12:26:57.969303] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.620 [2024-12-16 12:26:57.969323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.620 [2024-12-16 12:26:57.969443] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000020 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.620 [2024-12-16 12:26:57.969460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:52.620 #11 NEW cov: 12287 ft: 13681 corp: 4/78b lim: 35 exec/s: 0 rss: 72Mb L: 23/33 MS: 1 CMP- DE: " \000"- 00:06:52.620 [2024-12-16 12:26:58.009175] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.620 [2024-12-16 12:26:58.009205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.620 [2024-12-16 12:26:58.009326] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000026 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.620 [2024-12-16 12:26:58.009343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.620 [2024-12-16 12:26:58.009460] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000020 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.620 [2024-12-16 12:26:58.009478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:52.620 #12 NEW cov: 12372 ft: 13957 corp: 5/101b lim: 35 exec/s: 0 rss: 72Mb L: 23/33 MS: 1 ChangeByte- 00:06:52.620 [2024-12-16 12:26:58.068946] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:52.620 [2024-12-16 12:26:58.069104] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:52.620 [2024-12-16 12:26:58.069422] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.620 [2024-12-16 12:26:58.069452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:06:52.620 [2024-12-16 12:26:58.069568] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.621 [2024-12-16 12:26:58.069590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.621 [2024-12-16 12:26:58.069725] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.621 [2024-12-16 12:26:58.069747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:52.621 #13 NEW cov: 12372 ft: 14163 corp: 6/122b lim: 35 exec/s: 0 rss: 72Mb L: 21/33 MS: 1 CopyPart- 00:06:52.621 [2024-12-16 12:26:58.108899] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:52.621 [2024-12-16 12:26:58.109233] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.621 [2024-12-16 12:26:58.109262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.621 [2024-12-16 12:26:58.109369] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.621 [2024-12-16 12:26:58.109391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.621 #14 NEW cov: 12372 ft: 14431 corp: 7/142b lim: 35 exec/s: 0 rss: 72Mb L: 20/33 MS: 1 EraseBytes- 00:06:52.621 [2024-12-16 12:26:58.169394] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:52.621 [2024-12-16 12:26:58.169737] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.621 [2024-12-16 12:26:58.169766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.621 [2024-12-16 12:26:58.169884] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000026 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.621 [2024-12-16 12:26:58.169902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.621 [2024-12-16 12:26:58.170019] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:20000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.621 [2024-12-16 12:26:58.170040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:52.880 #15 NEW cov: 12372 ft: 14470 corp: 8/166b lim: 35 exec/s: 0 rss: 72Mb L: 24/33 MS: 1 InsertByte- 00:06:52.880 [2024-12-16 12:26:58.229625] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:52.880 [2024-12-16 12:26:58.229786] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:52.880 [2024-12-16 12:26:58.230105] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:9c00009c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.880 [2024-12-16 12:26:58.230133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.880 [2024-12-16 12:26:58.230247] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:9c9c009c cdw11:9c009c9c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.880 [2024-12-16 12:26:58.230266] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.880 [2024-12-16 12:26:58.230375] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00260000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.880 [2024-12-16 12:26:58.230396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:52.881 [2024-12-16 12:26:58.230513] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:3d000000 cdw11:00002000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.881 [2024-12-16 12:26:58.230533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:52.881 #16 NEW cov: 12372 ft: 14517 corp: 9/199b lim: 35 exec/s: 0 rss: 73Mb L: 33/33 MS: 1 InsertRepeatedBytes- 00:06:52.881 [2024-12-16 12:26:58.289716] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:52.881 [2024-12-16 12:26:58.290044] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00250009 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.881 [2024-12-16 12:26:58.290074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.881 [2024-12-16 12:26:58.290201] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000026 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.881 [2024-12-16 12:26:58.290218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.881 [2024-12-16 12:26:58.290342] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:20000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.881 [2024-12-16 12:26:58.290371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:52.881 #17 NEW cov: 12372 ft: 14577 corp: 10/223b lim: 35 exec/s: 0 rss: 73Mb L: 24/33 MS: 1 ChangeByte- 00:06:52.881 [2024-12-16 12:26:58.339746] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:52.881 [2024-12-16 12:26:58.339912] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:52.881 [2024-12-16 12:26:58.340236] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.881 [2024-12-16 12:26:58.340264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.881 [2024-12-16 12:26:58.340384] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.881 [2024-12-16 12:26:58.340405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.881 [2024-12-16 12:26:58.340518] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.881 [2024-12-16 12:26:58.340539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:52.881 #18 NEW cov: 12372 ft: 14645 corp: 11/244b lim: 35 exec/s: 0 rss: 73Mb L: 21/33 MS: 1 ChangeByte- 00:06:52.881 [2024-12-16 12:26:58.380310] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00002a00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.881 [2024-12-16 12:26:58.380337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.881 [2024-12-16 12:26:58.380455] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000026 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.881 [2024-12-16 12:26:58.380473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.881 [2024-12-16 12:26:58.380589] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000020 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.881 [2024-12-16 12:26:58.380606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:52.881 #19 NEW cov: 12372 ft: 14692 corp: 12/267b lim: 35 exec/s: 0 rss: 73Mb L: 23/33 MS: 1 ChangeByte- 00:06:52.881 [2024-12-16 12:26:58.419867] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:52.881 [2024-12-16 12:26:58.420217] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.881 [2024-12-16 12:26:58.420248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.881 [2024-12-16 12:26:58.420365] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.881 [2024-12-16 12:26:58.420383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.141 NEW_FUNC[1/1]: 0x1c586b8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:652 00:06:53.141 #20 NEW cov: 12395 ft: 14733 corp: 13/287b lim: 35 exec/s: 0 rss: 73Mb L: 20/33 MS: 1 CMP- DE: "\000\000\000\000\000\000\000\020"- 00:06:53.141 [2024-12-16 12:26:58.490662] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.141 [2024-12-16 12:26:58.490688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:06:53.141 [2024-12-16 12:26:58.490806] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000026 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.141 [2024-12-16 12:26:58.490825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.141 [2024-12-16 12:26:58.490943] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000020 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.141 [2024-12-16 12:26:58.490959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.141 #21 NEW cov: 12395 ft: 14743 corp: 14/310b lim: 35 exec/s: 0 rss: 73Mb L: 23/33 MS: 1 CopyPart- 00:06:53.141 [2024-12-16 12:26:58.540439] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:53.141 [2024-12-16 12:26:58.540774] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.141 [2024-12-16 12:26:58.540805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.141 [2024-12-16 12:26:58.540926] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000026 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.141 [2024-12-16 12:26:58.540942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.141 [2024-12-16 12:26:58.541066] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:20000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.141 [2024-12-16 12:26:58.541086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.141 #22 NEW cov: 12395 ft: 14800 corp: 15/334b lim: 35 exec/s: 22 rss: 73Mb L: 24/33 MS: 1 ChangeByte- 00:06:53.141 [2024-12-16 12:26:58.590589] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:53.141 [2024-12-16 12:26:58.590760] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:53.141 [2024-12-16 12:26:58.590916] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:53.141 [2024-12-16 12:26:58.591262] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.141 [2024-12-16 12:26:58.591294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.141 [2024-12-16 12:26:58.591410] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.141 [2024-12-16 12:26:58.591435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.141 [2024-12-16 12:26:58.591560] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000100 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.141 [2024-12-16 
12:26:58.591585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.141 [2024-12-16 12:26:58.591700] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.141 [2024-12-16 12:26:58.591722] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:53.141 #23 NEW cov: 12395 ft: 14835 corp: 16/367b lim: 35 exec/s: 23 rss: 73Mb L: 33/33 MS: 1 ChangeBit- 00:06:53.141 [2024-12-16 12:26:58.650604] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:53.141 [2024-12-16 12:26:58.650772] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:53.141 [2024-12-16 12:26:58.651094] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.141 [2024-12-16 12:26:58.651124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.141 [2024-12-16 12:26:58.651250] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.141 [2024-12-16 12:26:58.651270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.141 [2024-12-16 12:26:58.651395] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:15000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.141 [2024-12-16 12:26:58.651417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.141 #24 NEW cov: 12395 ft: 14843 corp: 17/388b lim: 35 exec/s: 24 rss: 73Mb L: 21/33 MS: 1 ChangeBinInt- 00:06:53.141 [2024-12-16 12:26:58.690682] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:53.141 [2024-12-16 12:26:58.690840] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:53.141 [2024-12-16 12:26:58.691157] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.141 [2024-12-16 12:26:58.691187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.141 [2024-12-16 12:26:58.691303] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:005b0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.141 [2024-12-16 12:26:58.691327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.141 [2024-12-16 12:26:58.691451] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.141 [2024-12-16 12:26:58.691473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.401 #25 NEW cov: 
12395 ft: 14977 corp: 18/409b lim: 35 exec/s: 25 rss: 73Mb L: 21/33 MS: 1 ChangeByte- 00:06:53.401 [2024-12-16 12:26:58.731274] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:01000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.401 [2024-12-16 12:26:58.731303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.401 [2024-12-16 12:26:58.731425] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000026 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.401 [2024-12-16 12:26:58.731442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.401 [2024-12-16 12:26:58.731565] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000020 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.401 [2024-12-16 12:26:58.731584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.401 #26 NEW cov: 12395 ft: 14993 corp: 19/432b lim: 35 exec/s: 26 rss: 73Mb L: 23/33 MS: 1 ChangeBit- 00:06:53.401 [2024-12-16 12:26:58.781082] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:53.401 [2024-12-16 12:26:58.781692] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.401 [2024-12-16 12:26:58.781730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.401 [2024-12-16 12:26:58.781860] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:005b0000 cdw11:bb0000bb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.401 [2024-12-16 12:26:58.781883] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.401 [2024-12-16 12:26:58.782003] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:bbbb00bb cdw11:bb00bbbb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.401 [2024-12-16 12:26:58.782022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.401 [2024-12-16 12:26:58.782144] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:000000bb cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.401 [2024-12-16 12:26:58.782166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:53.401 #27 NEW cov: 12395 ft: 15023 corp: 20/464b lim: 35 exec/s: 27 rss: 73Mb L: 32/33 MS: 1 InsertRepeatedBytes- 00:06:53.401 [2024-12-16 12:26:58.851667] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.401 [2024-12-16 12:26:58.851696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.401 [2024-12-16 12:26:58.851814] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:2b000026 
cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.401 [2024-12-16 12:26:58.851831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.401 [2024-12-16 12:26:58.851949] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000020 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.401 [2024-12-16 12:26:58.851967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.401 #28 NEW cov: 12395 ft: 15036 corp: 21/487b lim: 35 exec/s: 28 rss: 73Mb L: 23/33 MS: 1 ChangeByte- 00:06:53.401 [2024-12-16 12:26:58.921884] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000700 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.401 [2024-12-16 12:26:58.921914] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.401 [2024-12-16 12:26:58.922042] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:2b000026 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.401 [2024-12-16 12:26:58.922060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.401 [2024-12-16 12:26:58.922177] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000020 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.401 [2024-12-16 12:26:58.922193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.401 #29 NEW cov: 12395 ft: 15084 corp: 22/510b lim: 35 exec/s: 29 rss: 73Mb L: 23/33 MS: 1 ChangeBinInt- 00:06:53.661 [2024-12-16 12:26:58.991677] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:53.661 [2024-12-16 12:26:58.991846] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:53.661 [2024-12-16 12:26:58.992170] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:20000009 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.661 [2024-12-16 12:26:58.992200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.661 [2024-12-16 12:26:58.992327] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00260000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.661 [2024-12-16 12:26:58.992347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.661 [2024-12-16 12:26:58.992467] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:3d000000 cdw11:00002000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.661 [2024-12-16 12:26:58.992490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.661 #30 NEW cov: 12395 ft: 15120 corp: 23/536b lim: 35 exec/s: 30 rss: 73Mb L: 26/33 MS: 1 PersAutoDict- DE: " \000"- 00:06:53.661 [2024-12-16 12:26:59.052022] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 
nsid:0 cdw10:00000009 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.661 [2024-12-16 12:26:59.052054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.661 [2024-12-16 12:26:59.052176] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000026 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.661 [2024-12-16 12:26:59.052195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.661 #31 NEW cov: 12395 ft: 15138 corp: 24/552b lim: 35 exec/s: 31 rss: 73Mb L: 16/33 MS: 1 EraseBytes- 00:06:53.661 [2024-12-16 12:26:59.091781] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:53.661 [2024-12-16 12:26:59.091934] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:53.661 [2024-12-16 12:26:59.092095] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:53.661 [2024-12-16 12:26:59.092420] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.661 [2024-12-16 12:26:59.092455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.661 [2024-12-16 12:26:59.092573] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.661 [2024-12-16 12:26:59.092594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.661 [2024-12-16 12:26:59.092722] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.661 [2024-12-16 12:26:59.092744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.661 #32 NEW cov: 12395 ft: 15165 corp: 25/573b lim: 35 exec/s: 32 rss: 73Mb L: 21/33 MS: 1 CopyPart- 00:06:53.661 [2024-12-16 12:26:59.142053] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:53.661 [2024-12-16 12:26:59.142214] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:53.661 [2024-12-16 12:26:59.142362] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:53.661 [2024-12-16 12:26:59.142691] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.661 [2024-12-16 12:26:59.142719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.661 [2024-12-16 12:26:59.142842] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.661 [2024-12-16 12:26:59.142868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.661 [2024-12-16 12:26:59.142997] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:10000000 cdw11:00005b00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.661 [2024-12-16 12:26:59.143021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.661 [2024-12-16 12:26:59.143137] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.661 [2024-12-16 12:26:59.143158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:53.661 #33 NEW cov: 12395 ft: 15183 corp: 26/602b lim: 35 exec/s: 33 rss: 73Mb L: 29/33 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\020"- 00:06:53.661 [2024-12-16 12:26:59.182270] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:53.661 [2024-12-16 12:26:59.182429] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:53.661 [2024-12-16 12:26:59.182773] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.661 [2024-12-16 12:26:59.182801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.661 [2024-12-16 12:26:59.182919] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000026 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.661 [2024-12-16 12:26:59.182939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.661 [2024-12-16 12:26:59.183060] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:20000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.661 [2024-12-16 12:26:59.183084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.661 [2024-12-16 12:26:59.183205] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:20000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.661 [2024-12-16 12:26:59.183225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:53.661 #34 NEW cov: 12395 ft: 15199 corp: 27/633b lim: 35 exec/s: 34 rss: 73Mb L: 31/33 MS: 1 CopyPart- 00:06:53.661 [2024-12-16 12:26:59.222536] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:53.661 [2024-12-16 12:26:59.222926] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.661 [2024-12-16 12:26:59.222955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.661 [2024-12-16 12:26:59.223073] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000026 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.661 [2024-12-16 12:26:59.223090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 
cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.661 [2024-12-16 12:26:59.223206] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000020 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.661 [2024-12-16 12:26:59.223223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.661 [2024-12-16 12:26:59.223347] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.661 [2024-12-16 12:26:59.223369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:53.921 #35 NEW cov: 12395 ft: 15203 corp: 28/661b lim: 35 exec/s: 35 rss: 73Mb L: 28/33 MS: 1 InsertRepeatedBytes- 00:06:53.921 [2024-12-16 12:26:59.262426] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:53.921 [2024-12-16 12:26:59.262593] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:53.921 [2024-12-16 12:26:59.262779] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:53.921 [2024-12-16 12:26:59.263175] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.921 [2024-12-16 12:26:59.263202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.921 [2024-12-16 12:26:59.263324] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.921 [2024-12-16 12:26:59.263353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.921 [2024-12-16 12:26:59.263477] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000100 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.921 [2024-12-16 12:26:59.263498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.921 [2024-12-16 12:26:59.263614] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.921 [2024-12-16 12:26:59.263637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:53.921 #36 NEW cov: 12395 ft: 15241 corp: 29/694b lim: 35 exec/s: 36 rss: 73Mb L: 33/33 MS: 1 ShuffleBytes- 00:06:53.921 [2024-12-16 12:26:59.333292] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.921 [2024-12-16 12:26:59.333320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.921 [2024-12-16 12:26:59.333438] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000026 cdw11:61000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.921 [2024-12-16 12:26:59.333455] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.921 [2024-12-16 12:26:59.333575] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:61610061 cdw11:61006161 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.921 [2024-12-16 12:26:59.333591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.921 [2024-12-16 12:26:59.333698] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:61000061 cdw11:00000020 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.921 [2024-12-16 12:26:59.333716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:53.921 #37 NEW cov: 12395 ft: 15251 corp: 30/728b lim: 35 exec/s: 37 rss: 74Mb L: 34/34 MS: 1 InsertRepeatedBytes- 00:06:53.921 [2024-12-16 12:26:59.372728] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:53.921 [2024-12-16 12:26:59.373331] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.921 [2024-12-16 12:26:59.373359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.921 [2024-12-16 12:26:59.373474] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:bb005b00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.921 [2024-12-16 12:26:59.373495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.921 [2024-12-16 12:26:59.373616] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:bbbb00bb cdw11:bb00bbbb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.921 [2024-12-16 12:26:59.373633] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.921 [2024-12-16 12:26:59.373752] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:bb0000bb cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.921 [2024-12-16 12:26:59.373769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:53.921 #38 NEW cov: 12395 ft: 15268 corp: 31/761b lim: 35 exec/s: 38 rss: 74Mb L: 33/34 MS: 1 CopyPart- 00:06:53.921 [2024-12-16 12:26:59.442896] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:53.921 [2024-12-16 12:26:59.443058] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:53.921 [2024-12-16 12:26:59.443384] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.921 [2024-12-16 12:26:59.443413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.921 [2024-12-16 12:26:59.443533] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.921 [2024-12-16 12:26:59.443557] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.921 [2024-12-16 12:26:59.443675] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.921 [2024-12-16 12:26:59.443700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.921 #39 NEW cov: 12395 ft: 15276 corp: 32/782b lim: 35 exec/s: 39 rss: 74Mb L: 21/34 MS: 1 ShuffleBytes- 00:06:53.921 [2024-12-16 12:26:59.482967] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:53.921 [2024-12-16 12:26:59.483592] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:09000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.921 [2024-12-16 12:26:59.483627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.921 [2024-12-16 12:26:59.483740] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:2b000026 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.922 [2024-12-16 12:26:59.483759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.922 [2024-12-16 12:26:59.483881] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000020 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.922 [2024-12-16 12:26:59.483899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:54.181 #40 NEW cov: 12395 ft: 15298 corp: 33/805b lim: 35 exec/s: 40 rss: 74Mb L: 23/34 MS: 1 ShuffleBytes- 00:06:54.181 [2024-12-16 12:26:59.533261] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:54.181 [2024-12-16 12:26:59.533424] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:54.182 [2024-12-16 12:26:59.533573] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:54.182 [2024-12-16 12:26:59.533939] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.182 [2024-12-16 12:26:59.533967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.182 [2024-12-16 12:26:59.534096] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.182 [2024-12-16 12:26:59.534122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.182 [2024-12-16 12:26:59.534244] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.182 [2024-12-16 12:26:59.534266] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:54.182 [2024-12-16 12:26:59.534392] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.182 [2024-12-16 12:26:59.534417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:54.182 #41 NEW cov: 12395 ft: 15362 corp: 34/838b lim: 35 exec/s: 20 rss: 74Mb L: 33/34 MS: 1 CopyPart- 00:06:54.182 #41 DONE cov: 12395 ft: 15362 corp: 34/838b lim: 35 exec/s: 20 rss: 74Mb 00:06:54.182 ###### Recommended dictionary. ###### 00:06:54.182 " \000" # Uses: 1 00:06:54.182 "\000\000\000\000\000\000\000\020" # Uses: 1 00:06:54.182 ###### End of recommended dictionary. ###### 00:06:54.182 Done 41 runs in 2 second(s) 00:06:54.182 12:26:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_2.conf /var/tmp/suppress_nvmf_fuzz 00:06:54.182 12:26:59 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:54.182 12:26:59 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:54.182 12:26:59 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:06:54.182 12:26:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=3 00:06:54.182 12:26:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:54.182 12:26:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:54.182 12:26:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:06:54.182 12:26:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_3.conf 00:06:54.182 12:26:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:54.182 12:26:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:54.182 12:26:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 3 00:06:54.182 12:26:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4403 00:06:54.182 12:26:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:06:54.182 12:26:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' 00:06:54.182 12:26:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4403"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:54.182 12:26:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:54.182 12:26:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:54.182 12:26:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' -c /tmp/fuzz_json_3.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 -Z 3 00:06:54.182 [2024-12-16 12:26:59.697334] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 
00:06:54.182 [2024-12-16 12:26:59.697385] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid984623 ] 00:06:54.441 [2024-12-16 12:26:59.879496] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.441 [2024-12-16 12:26:59.918063] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.441 [2024-12-16 12:26:59.977537] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:54.441 [2024-12-16 12:26:59.993842] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4403 *** 00:06:54.700 INFO: Running with entropic power schedule (0xFF, 100). 00:06:54.700 INFO: Seed: 4084343721 00:06:54.700 INFO: Loaded 1 modules (390948 inline 8-bit counters): 390948 [0x2c8afcc, 0x2cea6f0), 00:06:54.700 INFO: Loaded 1 PC tables (390948 PCs): 390948 [0x2cea6f0,0x32e1930), 00:06:54.700 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:06:54.700 INFO: A corpus is not provided, starting from an empty corpus 00:06:54.700 #2 INITED exec/s: 0 rss: 66Mb 00:06:54.700 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:54.700 This may also happen if the target rejected all inputs we tried so far 00:06:54.959 NEW_FUNC[1/705]: 0x440c78 in fuzz_admin_abort_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:114 00:06:54.959 NEW_FUNC[2/705]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:54.959 #3 NEW cov: 12053 ft: 12015 corp: 2/6b lim: 20 exec/s: 0 rss: 72Mb L: 5/5 MS: 1 CMP- DE: "!\000\000\000"- 00:06:54.959 #15 NEW cov: 12166 ft: 12584 corp: 3/10b lim: 20 exec/s: 0 rss: 72Mb L: 4/5 MS: 2 EraseBytes-CrossOver- 00:06:54.959 #19 NEW cov: 12172 ft: 12924 corp: 4/16b lim: 20 exec/s: 0 rss: 72Mb L: 6/6 MS: 4 ChangeByte-ChangeBinInt-InsertByte-PersAutoDict- DE: "!\000\000\000"- 00:06:55.218 #23 NEW cov: 12257 ft: 13240 corp: 5/21b lim: 20 exec/s: 0 rss: 72Mb L: 5/6 MS: 4 ChangeByte-ChangeByte-ChangeByte-CrossOver- 00:06:55.218 #24 NEW cov: 12257 ft: 13355 corp: 6/27b lim: 20 exec/s: 0 rss: 72Mb L: 6/6 MS: 1 PersAutoDict- DE: "!\000\000\000"- 00:06:55.218 [2024-12-16 12:27:00.651925] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:55.218 [2024-12-16 12:27:00.651972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.218 NEW_FUNC[1/17]: 0x137c5c8 in nvmf_qpair_abort_request /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:3485 00:06:55.218 NEW_FUNC[2/17]: 0x137d148 in nvmf_qpair_abort_aer /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:3427 00:06:55.218 #25 NEW cov: 12517 ft: 13997 corp: 7/37b lim: 20 exec/s: 0 rss: 73Mb L: 10/10 MS: 1 PersAutoDict- DE: "!\000\000\000"- 00:06:55.218 #26 NEW cov: 12517 ft: 14179 corp: 8/46b lim: 20 exec/s: 0 rss: 73Mb L: 9/10 MS: 1 PersAutoDict- DE: "!\000\000\000"- 00:06:55.476 #27 NEW cov: 12517 ft: 14258 corp: 9/53b lim: 20 exec/s: 0 rss: 73Mb L: 7/10 MS: 1 InsertByte- 00:06:55.476 #28 NEW cov: 12517 ft: 14305 corp: 10/59b lim: 20 exec/s: 0 rss: 73Mb L: 6/10 MS: 1 
ChangeBit- 00:06:55.476 #29 NEW cov: 12517 ft: 14338 corp: 11/64b lim: 20 exec/s: 0 rss: 73Mb L: 5/10 MS: 1 PersAutoDict- DE: "!\000\000\000"- 00:06:55.476 NEW_FUNC[1/1]: 0x1c586b8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:652 00:06:55.476 #30 NEW cov: 12540 ft: 14383 corp: 12/70b lim: 20 exec/s: 0 rss: 73Mb L: 6/10 MS: 1 ChangeByte- 00:06:55.476 #31 NEW cov: 12540 ft: 14405 corp: 13/76b lim: 20 exec/s: 0 rss: 73Mb L: 6/10 MS: 1 ChangeBit- 00:06:55.735 #32 NEW cov: 12540 ft: 14429 corp: 14/81b lim: 20 exec/s: 32 rss: 73Mb L: 5/10 MS: 1 CopyPart- 00:06:55.735 #35 NEW cov: 12540 ft: 14456 corp: 15/86b lim: 20 exec/s: 35 rss: 73Mb L: 5/10 MS: 3 EraseBytes-EraseBytes-InsertRepeatedBytes- 00:06:55.735 NEW_FUNC[1/2]: 0x14f47d8 in nvmf_transport_qpair_abort_request /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/transport.c:819 00:06:55.735 NEW_FUNC[2/2]: 0x151ad38 in nvmf_tcp_qpair_abort_request /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/tcp.c:3687 00:06:55.735 #41 NEW cov: 12596 ft: 14545 corp: 16/94b lim: 20 exec/s: 41 rss: 73Mb L: 8/10 MS: 1 CopyPart- 00:06:55.735 #42 NEW cov: 12596 ft: 14554 corp: 17/99b lim: 20 exec/s: 42 rss: 73Mb L: 5/10 MS: 1 CrossOver- 00:06:55.995 #45 NEW cov: 12613 ft: 14883 corp: 18/116b lim: 20 exec/s: 45 rss: 73Mb L: 17/17 MS: 3 CrossOver-CMP-InsertRepeatedBytes- DE: "\001\000"- 00:06:55.995 #46 NEW cov: 12613 ft: 14900 corp: 19/121b lim: 20 exec/s: 46 rss: 73Mb L: 5/17 MS: 1 CopyPart- 00:06:55.995 #47 NEW cov: 12613 ft: 14922 corp: 20/126b lim: 20 exec/s: 47 rss: 73Mb L: 5/17 MS: 1 PersAutoDict- DE: "\001\000"- 00:06:55.995 #51 NEW cov: 12613 ft: 14940 corp: 21/130b lim: 20 exec/s: 51 rss: 73Mb L: 4/17 MS: 4 ChangeBit-InsertByte-CopyPart-InsertByte- 00:06:56.254 #52 NEW cov: 12613 ft: 14957 corp: 22/137b lim: 20 exec/s: 52 rss: 73Mb L: 7/17 MS: 1 CopyPart- 00:06:56.254 #53 NEW cov: 12613 ft: 14990 corp: 23/143b lim: 20 exec/s: 53 rss: 73Mb L: 6/17 MS: 1 ChangeBit- 00:06:56.254 #54 NEW cov: 12613 ft: 15002 corp: 24/149b lim: 20 exec/s: 54 rss: 73Mb L: 6/17 MS: 1 ShuffleBytes- 00:06:56.254 #55 NEW cov: 12613 ft: 15026 corp: 25/155b lim: 20 exec/s: 55 rss: 73Mb L: 6/17 MS: 1 InsertByte- 00:06:56.254 #56 NEW cov: 12613 ft: 15031 corp: 26/163b lim: 20 exec/s: 56 rss: 74Mb L: 8/17 MS: 1 InsertByte- 00:06:56.513 #57 NEW cov: 12613 ft: 15040 corp: 27/169b lim: 20 exec/s: 57 rss: 74Mb L: 6/17 MS: 1 ChangeBinInt- 00:06:56.513 #58 NEW cov: 12613 ft: 15053 corp: 28/177b lim: 20 exec/s: 58 rss: 74Mb L: 8/17 MS: 1 InsertByte- 00:06:56.513 #59 NEW cov: 12617 ft: 15172 corp: 29/189b lim: 20 exec/s: 59 rss: 74Mb L: 12/17 MS: 1 CopyPart- 00:06:56.513 #60 NEW cov: 12617 ft: 15199 corp: 30/197b lim: 20 exec/s: 30 rss: 74Mb L: 8/17 MS: 1 PersAutoDict- DE: "\001\000"- 00:06:56.513 #60 DONE cov: 12617 ft: 15199 corp: 30/197b lim: 20 exec/s: 30 rss: 74Mb 00:06:56.513 ###### Recommended dictionary. ###### 00:06:56.513 "!\000\000\000" # Uses: 5 00:06:56.513 "\001\000" # Uses: 2 00:06:56.513 ###### End of recommended dictionary. 
###### 00:06:56.513 Done 60 runs in 2 second(s) 00:06:56.773 12:27:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_3.conf /var/tmp/suppress_nvmf_fuzz 00:06:56.773 12:27:02 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:56.773 12:27:02 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:56.773 12:27:02 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:06:56.773 12:27:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=4 00:06:56.773 12:27:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:56.773 12:27:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:56.773 12:27:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:06:56.773 12:27:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_4.conf 00:06:56.773 12:27:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:56.773 12:27:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:56.773 12:27:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 4 00:06:56.773 12:27:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4404 00:06:56.773 12:27:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:06:56.773 12:27:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' 00:06:56.773 12:27:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4404"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:56.773 12:27:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:56.773 12:27:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:56.773 12:27:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' -c /tmp/fuzz_json_4.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 -Z 4 00:06:56.773 [2024-12-16 12:27:02.201591] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:06:56.773 [2024-12-16 12:27:02.201668] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid985274 ] 00:06:57.033 [2024-12-16 12:27:02.459680] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.033 [2024-12-16 12:27:02.516710] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.033 [2024-12-16 12:27:02.575979] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:57.033 [2024-12-16 12:27:02.592295] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4404 *** 00:06:57.292 INFO: Running with entropic power schedule (0xFF, 100). 
00:06:57.292 INFO: Seed: 2388370675 00:06:57.292 INFO: Loaded 1 modules (390948 inline 8-bit counters): 390948 [0x2c8afcc, 0x2cea6f0), 00:06:57.292 INFO: Loaded 1 PC tables (390948 PCs): 390948 [0x2cea6f0,0x32e1930), 00:06:57.292 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:06:57.292 INFO: A corpus is not provided, starting from an empty corpus 00:06:57.292 #2 INITED exec/s: 0 rss: 65Mb 00:06:57.292 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:57.292 This may also happen if the target rejected all inputs we tried so far 00:06:57.292 [2024-12-16 12:27:02.648249] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:e7e78ae7 cdw11:e7e70003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.292 [2024-12-16 12:27:02.648278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.292 [2024-12-16 12:27:02.648337] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:e7e7e7e7 cdw11:e7e70003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.292 [2024-12-16 12:27:02.648350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.292 [2024-12-16 12:27:02.648403] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:e7e7e7e7 cdw11:e7e70003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.292 [2024-12-16 12:27:02.648417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:57.292 [2024-12-16 12:27:02.648471] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:e7e7e7e7 cdw11:e7e70003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.292 [2024-12-16 12:27:02.648484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:57.552 NEW_FUNC[1/717]: 0x441d78 in fuzz_admin_create_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:126 00:06:57.552 NEW_FUNC[2/717]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:57.552 #9 NEW cov: 12180 ft: 12179 corp: 2/32b lim: 35 exec/s: 0 rss: 72Mb L: 31/31 MS: 2 ChangeBit-InsertRepeatedBytes- 00:06:57.552 [2024-12-16 12:27:02.990024] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:e7e78ae7 cdw11:e7e70003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.552 [2024-12-16 12:27:02.990065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.552 [2024-12-16 12:27:02.990169] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:e7e7e7e7 cdw11:e7e70003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.552 [2024-12-16 12:27:02.990188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.552 [2024-12-16 12:27:02.990308] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:e7e7e7e7 cdw11:e7e70003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.552 
[2024-12-16 12:27:02.990324] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:57.552 [2024-12-16 12:27:02.990438] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ece7e7e7 cdw11:e7e70003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.552 [2024-12-16 12:27:02.990454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:57.552 #15 NEW cov: 12293 ft: 12959 corp: 3/63b lim: 35 exec/s: 0 rss: 72Mb L: 31/31 MS: 1 ChangeBinInt- 00:06:57.552 [2024-12-16 12:27:03.050114] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:e7e78ae7 cdw11:e7e70003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.552 [2024-12-16 12:27:03.050143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.552 [2024-12-16 12:27:03.050278] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:e7e7e7e7 cdw11:e7e70003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.552 [2024-12-16 12:27:03.050295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.552 [2024-12-16 12:27:03.050417] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:e7e7e7e7 cdw11:e7e70003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.552 [2024-12-16 12:27:03.050434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:57.552 [2024-12-16 12:27:03.050552] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ece767e7 cdw11:e7e70003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.552 [2024-12-16 12:27:03.050570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:57.552 #16 NEW cov: 12299 ft: 13260 corp: 4/94b lim: 35 exec/s: 0 rss: 72Mb L: 31/31 MS: 1 ChangeBit- 00:06:57.552 [2024-12-16 12:27:03.110072] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:e7e7e800 cdw11:e7e70003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.552 [2024-12-16 12:27:03.110098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.552 [2024-12-16 12:27:03.110225] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:e7e7e7e7 cdw11:e7e70003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.552 [2024-12-16 12:27:03.110241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.552 [2024-12-16 12:27:03.110354] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:e7e7e7e7 cdw11:e7e70003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.552 [2024-12-16 12:27:03.110369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:57.811 #21 NEW cov: 12384 ft: 13792 corp: 5/117b lim: 35 exec/s: 0 rss: 72Mb L: 23/31 MS: 5 InsertRepeatedBytes-EraseBytes-EraseBytes-InsertByte-CrossOver- 00:06:57.811 [2024-12-16 12:27:03.149651] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:be0a0abe cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.811 [2024-12-16 12:27:03.149677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.811 #25 NEW cov: 12384 ft: 14604 corp: 6/129b lim: 35 exec/s: 0 rss: 72Mb L: 12/31 MS: 4 InsertByte-CopyPart-ShuffleBytes-CMP- DE: "\377\377\377\377\377\377\377\377"- 00:06:57.811 [2024-12-16 12:27:03.190582] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:e7e78ae7 cdw11:e7e70003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.811 [2024-12-16 12:27:03.190608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.811 [2024-12-16 12:27:03.190736] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:e7e7e7e7 cdw11:e7e70003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.811 [2024-12-16 12:27:03.190754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.811 [2024-12-16 12:27:03.190882] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:e7e7e7e7 cdw11:e7e70003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.811 [2024-12-16 12:27:03.190901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:57.811 [2024-12-16 12:27:03.191024] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:141867e7 cdw11:18180003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.811 [2024-12-16 12:27:03.191040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:57.811 #26 NEW cov: 12384 ft: 14699 corp: 7/160b lim: 35 exec/s: 0 rss: 72Mb L: 31/31 MS: 1 ChangeBinInt- 00:06:57.811 [2024-12-16 12:27:03.249913] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.811 [2024-12-16 12:27:03.249942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.812 #27 NEW cov: 12384 ft: 14743 corp: 8/169b lim: 35 exec/s: 0 rss: 72Mb L: 9/31 MS: 1 PersAutoDict- DE: "\377\377\377\377\377\377\377\377"- 00:06:57.812 [2024-12-16 12:27:03.290059] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:be0a0abe cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.812 [2024-12-16 12:27:03.290086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.812 #33 NEW cov: 12384 ft: 14758 corp: 9/181b lim: 35 exec/s: 0 rss: 72Mb L: 12/31 MS: 1 ChangeByte- 00:06:57.812 [2024-12-16 12:27:03.351073] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:e7e78ae7 cdw11:e7e70003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.812 [2024-12-16 12:27:03.351100] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.812 [2024-12-16 12:27:03.351220] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:e7e7e7e7 cdw11:e7e70003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.812 [2024-12-16 12:27:03.351238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.812 [2024-12-16 12:27:03.351353] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:e7e7e7e7 cdw11:e7e70003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.812 [2024-12-16 12:27:03.351371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:57.812 [2024-12-16 12:27:03.351485] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:e7e7e7e7 cdw11:e7e70003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.812 [2024-12-16 12:27:03.351502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:57.812 #34 NEW cov: 12384 ft: 14781 corp: 10/212b lim: 35 exec/s: 0 rss: 72Mb L: 31/31 MS: 1 ChangeBinInt- 00:06:58.071 [2024-12-16 12:27:03.390303] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:be0a0abe cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.071 [2024-12-16 12:27:03.390331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.071 #35 NEW cov: 12384 ft: 14831 corp: 11/224b lim: 35 exec/s: 0 rss: 72Mb L: 12/31 MS: 1 ChangeBit- 00:06:58.071 [2024-12-16 12:27:03.430427] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:50505050 cdw11:50500002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.071 [2024-12-16 12:27:03.430454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.071 #37 NEW cov: 12384 ft: 14867 corp: 12/232b lim: 35 exec/s: 0 rss: 72Mb L: 8/31 MS: 2 ChangeBit-InsertRepeatedBytes- 00:06:58.071 [2024-12-16 12:27:03.471415] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:e7e7e800 cdw11:e7e70003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.071 [2024-12-16 12:27:03.471445] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.071 [2024-12-16 12:27:03.471565] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:e7e7e7e7 cdw11:e7e70003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.071 [2024-12-16 12:27:03.471583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.071 [2024-12-16 12:27:03.471708] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:e7e7e7e7 cdw11:e7e70003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.071 [2024-12-16 12:27:03.471727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.071 [2024-12-16 12:27:03.471843] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:e5e5ec0a cdw11:e5e50003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.071 [2024-12-16 12:27:03.471861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 
cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:58.071 #38 NEW cov: 12384 ft: 14877 corp: 13/260b lim: 35 exec/s: 0 rss: 72Mb L: 28/31 MS: 1 InsertRepeatedBytes- 00:06:58.071 [2024-12-16 12:27:03.540747] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:2b505050 cdw11:50500002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.071 [2024-12-16 12:27:03.540774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.071 NEW_FUNC[1/1]: 0x1c586b8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:652 00:06:58.071 #39 NEW cov: 12407 ft: 14929 corp: 14/268b lim: 35 exec/s: 0 rss: 73Mb L: 8/31 MS: 1 ChangeByte- 00:06:58.071 [2024-12-16 12:27:03.610986] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:be0a0abe cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.071 [2024-12-16 12:27:03.611015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.331 #40 NEW cov: 12407 ft: 14975 corp: 15/280b lim: 35 exec/s: 40 rss: 73Mb L: 12/31 MS: 1 ChangeBinInt- 00:06:58.331 [2024-12-16 12:27:03.681999] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:e7e78ae7 cdw11:e7e70003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.331 [2024-12-16 12:27:03.682027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.331 [2024-12-16 12:27:03.682145] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:e7e7e7e7 cdw11:e7e70003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.331 [2024-12-16 12:27:03.682162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.331 [2024-12-16 12:27:03.682277] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:e7e7e7e7 cdw11:efe70003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.331 [2024-12-16 12:27:03.682293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.331 [2024-12-16 12:27:03.682413] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:141867e7 cdw11:18180003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.331 [2024-12-16 12:27:03.682430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:58.331 #41 NEW cov: 12407 ft: 15025 corp: 16/311b lim: 35 exec/s: 41 rss: 73Mb L: 31/31 MS: 1 ChangeBit- 00:06:58.331 [2024-12-16 12:27:03.741888] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:e7e78ae7 cdw11:e7e70003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.331 [2024-12-16 12:27:03.741917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.331 [2024-12-16 12:27:03.742034] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:e7e7e7e7 cdw11:e7e70003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.331 [2024-12-16 12:27:03.742051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 
sqhd:0010 p:0 m:0 dnr:0 00:06:58.331 [2024-12-16 12:27:03.742170] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:e7e7e7e7 cdw11:e7e70003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.331 [2024-12-16 12:27:03.742191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.331 #42 NEW cov: 12407 ft: 15046 corp: 17/338b lim: 35 exec/s: 42 rss: 73Mb L: 27/31 MS: 1 EraseBytes- 00:06:58.331 [2024-12-16 12:27:03.781994] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:e7e7e7e7 cdw11:e7e70003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.331 [2024-12-16 12:27:03.782020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.331 [2024-12-16 12:27:03.782135] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:e7e7e7e7 cdw11:e7e70003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.331 [2024-12-16 12:27:03.782153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.331 [2024-12-16 12:27:03.782269] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:e7e7e7e7 cdw11:e7e70003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.331 [2024-12-16 12:27:03.782285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.331 #43 NEW cov: 12407 ft: 15054 corp: 18/365b lim: 35 exec/s: 43 rss: 73Mb L: 27/31 MS: 1 EraseBytes- 00:06:58.331 [2024-12-16 12:27:03.822345] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:e7e78ae7 cdw11:e7e70003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.331 [2024-12-16 12:27:03.822371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.331 [2024-12-16 12:27:03.822491] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:f0e7e7e7 cdw11:e7e70003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.331 [2024-12-16 12:27:03.822524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.331 [2024-12-16 12:27:03.822646] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:e7e7e7e7 cdw11:e7e70003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.331 [2024-12-16 12:27:03.822664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.331 [2024-12-16 12:27:03.822776] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:e7e7e7e7 cdw11:e7e70003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.331 [2024-12-16 12:27:03.822793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:58.331 #44 NEW cov: 12407 ft: 15078 corp: 19/396b lim: 35 exec/s: 44 rss: 73Mb L: 31/31 MS: 1 ChangeBinInt- 00:06:58.331 [2024-12-16 12:27:03.882512] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:e7e78ae7 cdw11:e7e70003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.331 [2024-12-16 
12:27:03.882538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.331 [2024-12-16 12:27:03.882661] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:f0e7e7e7 cdw11:e7e70003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.331 [2024-12-16 12:27:03.882680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.331 [2024-12-16 12:27:03.882794] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:e7e7e7e7 cdw11:e7e70003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.331 [2024-12-16 12:27:03.882810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.331 [2024-12-16 12:27:03.882927] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:e7e70800 cdw11:e7e70003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.331 [2024-12-16 12:27:03.882947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:58.591 #45 NEW cov: 12407 ft: 15092 corp: 20/429b lim: 35 exec/s: 45 rss: 73Mb L: 33/33 MS: 1 CMP- DE: "\010\000"- 00:06:58.591 [2024-12-16 12:27:03.941842] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:be0a0a27 cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.591 [2024-12-16 12:27:03.941868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.591 #46 NEW cov: 12407 ft: 15117 corp: 21/441b lim: 35 exec/s: 46 rss: 73Mb L: 12/33 MS: 1 ChangeByte- 00:06:58.591 [2024-12-16 12:27:04.002089] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:be0a0abe cdw11:7eff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.591 [2024-12-16 12:27:04.002117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.591 #47 NEW cov: 12407 ft: 15133 corp: 22/453b lim: 35 exec/s: 47 rss: 73Mb L: 12/33 MS: 1 ChangeByte- 00:06:58.591 [2024-12-16 12:27:04.042411] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.591 [2024-12-16 12:27:04.042438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.591 [2024-12-16 12:27:04.042575] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.591 [2024-12-16 12:27:04.042593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.591 #49 NEW cov: 12407 ft: 15408 corp: 23/473b lim: 35 exec/s: 49 rss: 73Mb L: 20/33 MS: 2 CopyPart-InsertRepeatedBytes- 00:06:58.591 [2024-12-16 12:27:04.082794] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:e7e78ae7 cdw11:e7e70003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.591 [2024-12-16 12:27:04.082822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 
sqhd:000f p:0 m:0 dnr:0 00:06:58.591 [2024-12-16 12:27:04.082939] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:e7e7e7e7 cdw11:e7e70003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.591 [2024-12-16 12:27:04.082974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.591 [2024-12-16 12:27:04.083096] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:e7e7e7e7 cdw11:e7e70003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.591 [2024-12-16 12:27:04.083114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.591 #50 NEW cov: 12407 ft: 15436 corp: 24/497b lim: 35 exec/s: 50 rss: 73Mb L: 24/33 MS: 1 EraseBytes- 00:06:58.591 [2024-12-16 12:27:04.122387] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:beff0abe cdw11:01ff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.591 [2024-12-16 12:27:04.122413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.850 #51 NEW cov: 12407 ft: 15449 corp: 25/509b lim: 35 exec/s: 51 rss: 73Mb L: 12/33 MS: 1 CopyPart- 00:06:58.850 [2024-12-16 12:27:04.183335] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:e7e78ae7 cdw11:e7e70003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.850 [2024-12-16 12:27:04.183364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.851 [2024-12-16 12:27:04.183473] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:e7e7e7e7 cdw11:e7e70003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.851 [2024-12-16 12:27:04.183496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.851 [2024-12-16 12:27:04.183614] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:e7e7e7e7 cdw11:efe70003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.851 [2024-12-16 12:27:04.183632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.851 [2024-12-16 12:27:04.183746] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:141867e7 cdw11:18180003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.851 [2024-12-16 12:27:04.183762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:58.851 #52 NEW cov: 12407 ft: 15498 corp: 26/540b lim: 35 exec/s: 52 rss: 73Mb L: 31/33 MS: 1 ShuffleBytes- 00:06:58.851 [2024-12-16 12:27:04.242723] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:be0a0abe cdw11:7eff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.851 [2024-12-16 12:27:04.242751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.851 #53 NEW cov: 12407 ft: 15503 corp: 27/552b lim: 35 exec/s: 53 rss: 73Mb L: 12/33 MS: 1 ChangeBit- 00:06:58.851 [2024-12-16 12:27:04.302905] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) 
qid:0 cid:4 nsid:0 cdw10:be0a0abe cdw11:7eff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.851 [2024-12-16 12:27:04.302933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.851 #54 NEW cov: 12407 ft: 15507 corp: 28/564b lim: 35 exec/s: 54 rss: 73Mb L: 12/33 MS: 1 CrossOver- 00:06:58.851 [2024-12-16 12:27:04.343555] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:e7e78ae7 cdw11:e7e70003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.851 [2024-12-16 12:27:04.343581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.851 [2024-12-16 12:27:04.343716] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:e7e7e7e7 cdw11:e7e70003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.851 [2024-12-16 12:27:04.343733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.851 [2024-12-16 12:27:04.343851] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:e77ae7e7 cdw11:e7e70003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.851 [2024-12-16 12:27:04.343867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.851 #55 NEW cov: 12407 ft: 15575 corp: 29/588b lim: 35 exec/s: 55 rss: 74Mb L: 24/33 MS: 1 ChangeByte- 00:06:58.851 [2024-12-16 12:27:04.403581] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.851 [2024-12-16 12:27:04.403614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.851 [2024-12-16 12:27:04.403725] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:bebeffff cdw11:0aff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.851 [2024-12-16 12:27:04.403743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.111 #56 NEW cov: 12407 ft: 15591 corp: 30/608b lim: 35 exec/s: 56 rss: 74Mb L: 20/33 MS: 1 PersAutoDict- DE: "\377\377\377\377\377\377\377\377"- 00:06:59.111 [2024-12-16 12:27:04.444105] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:e7e7e800 cdw11:e7e70003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.111 [2024-12-16 12:27:04.444135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.111 [2024-12-16 12:27:04.444240] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:e7e7e7e7 cdw11:e7e70003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.111 [2024-12-16 12:27:04.444256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.111 [2024-12-16 12:27:04.444375] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:e7e7e7e7 cdw11:e7e70003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.111 [2024-12-16 12:27:04.444392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 
cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.111 [2024-12-16 12:27:04.444508] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:e7e7e7e7 cdw11:e7e70003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.111 [2024-12-16 12:27:04.444524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:59.111 #57 NEW cov: 12407 ft: 15602 corp: 31/642b lim: 35 exec/s: 57 rss: 74Mb L: 34/34 MS: 1 CopyPart- 00:06:59.111 [2024-12-16 12:27:04.483487] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:be0a0abe cdw11:7eff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.111 [2024-12-16 12:27:04.483516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.111 #58 NEW cov: 12407 ft: 15611 corp: 32/654b lim: 35 exec/s: 58 rss: 74Mb L: 12/34 MS: 1 ShuffleBytes- 00:06:59.111 [2024-12-16 12:27:04.544469] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:e7e78ae7 cdw11:e7e70003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.111 [2024-12-16 12:27:04.544496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.111 [2024-12-16 12:27:04.544616] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:e7e7e7e7 cdw11:e7310003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.111 [2024-12-16 12:27:04.544634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.111 [2024-12-16 12:27:04.544746] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:e7e7e7e7 cdw11:e7e70003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.111 [2024-12-16 12:27:04.544762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.111 [2024-12-16 12:27:04.544877] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:e7e7e7e7 cdw11:e7e70003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.111 [2024-12-16 12:27:04.544895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:59.111 #59 NEW cov: 12407 ft: 15640 corp: 33/685b lim: 35 exec/s: 59 rss: 74Mb L: 31/34 MS: 1 ChangeByte- 00:06:59.111 [2024-12-16 12:27:04.583779] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:beff0abe cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.111 [2024-12-16 12:27:04.583808] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.111 #60 NEW cov: 12407 ft: 15656 corp: 34/695b lim: 35 exec/s: 60 rss: 74Mb L: 10/34 MS: 1 EraseBytes- 00:06:59.111 [2024-12-16 12:27:04.623886] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:be0a0a27 cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.111 [2024-12-16 12:27:04.623913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.111 #61 NEW cov: 12407 ft: 15680 corp: 35/707b lim: 35 exec/s: 30 rss: 74Mb L: 12/34 MS: 1 
PersAutoDict- DE: "\377\377\377\377\377\377\377\377"- 00:06:59.111 #61 DONE cov: 12407 ft: 15680 corp: 35/707b lim: 35 exec/s: 30 rss: 74Mb 00:06:59.111 ###### Recommended dictionary. ###### 00:06:59.111 "\377\377\377\377\377\377\377\377" # Uses: 3 00:06:59.111 "\010\000" # Uses: 0 00:06:59.111 ###### End of recommended dictionary. ###### 00:06:59.111 Done 61 runs in 2 second(s) 00:06:59.374 12:27:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_4.conf /var/tmp/suppress_nvmf_fuzz 00:06:59.374 12:27:04 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:59.374 12:27:04 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:59.374 12:27:04 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1 00:06:59.374 12:27:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=5 00:06:59.374 12:27:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:59.374 12:27:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:59.374 12:27:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:06:59.374 12:27:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_5.conf 00:06:59.374 12:27:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:59.374 12:27:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:59.374 12:27:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 5 00:06:59.374 12:27:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4405 00:06:59.374 12:27:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:06:59.374 12:27:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' 00:06:59.374 12:27:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4405"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:59.374 12:27:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:59.374 12:27:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:59.374 12:27:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' -c /tmp/fuzz_json_5.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 -Z 5 00:06:59.374 [2024-12-16 12:27:04.819850] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 
00:06:59.374 [2024-12-16 12:27:04.819918] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid985880 ] 00:06:59.633 [2024-12-16 12:27:05.080641] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.633 [2024-12-16 12:27:05.128550] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.633 [2024-12-16 12:27:05.188103] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:59.892 [2024-12-16 12:27:05.204410] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4405 *** 00:06:59.892 INFO: Running with entropic power schedule (0xFF, 100). 00:06:59.892 INFO: Seed: 703409695 00:06:59.892 INFO: Loaded 1 modules (390948 inline 8-bit counters): 390948 [0x2c8afcc, 0x2cea6f0), 00:06:59.892 INFO: Loaded 1 PC tables (390948 PCs): 390948 [0x2cea6f0,0x32e1930), 00:06:59.892 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:06:59.892 INFO: A corpus is not provided, starting from an empty corpus 00:06:59.892 #2 INITED exec/s: 0 rss: 66Mb 00:06:59.892 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:59.892 This may also happen if the target rejected all inputs we tried so far 00:06:59.892 [2024-12-16 12:27:05.253158] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff56ff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.892 [2024-12-16 12:27:05.253191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.150 NEW_FUNC[1/717]: 0x443f18 in fuzz_admin_create_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:142 00:07:00.150 NEW_FUNC[2/717]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:00.150 #6 NEW cov: 12191 ft: 12186 corp: 2/16b lim: 45 exec/s: 0 rss: 72Mb L: 15/15 MS: 4 ChangeByte-CopyPart-EraseBytes-InsertRepeatedBytes- 00:07:00.150 [2024-12-16 12:27:05.593987] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffff56 cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.150 [2024-12-16 12:27:05.594019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.150 #7 NEW cov: 12304 ft: 12604 corp: 3/31b lim: 45 exec/s: 0 rss: 72Mb L: 15/15 MS: 1 ShuffleBytes- 00:07:00.150 [2024-12-16 12:27:05.654087] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff56ff cdw11:ffff0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.150 [2024-12-16 12:27:05.654112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.150 #8 NEW cov: 12310 ft: 12881 corp: 4/47b lim: 45 exec/s: 0 rss: 72Mb L: 16/16 MS: 1 InsertByte- 00:07:00.150 [2024-12-16 12:27:05.694606] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:76767676 cdw11:76760003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.150 [2024-12-16 12:27:05.694636] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.150 [2024-12-16 12:27:05.694686] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:76767676 cdw11:76760003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.150 [2024-12-16 12:27:05.694699] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.151 [2024-12-16 12:27:05.694751] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:76767676 cdw11:76760003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.151 [2024-12-16 12:27:05.694765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.151 [2024-12-16 12:27:05.694813] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:88ff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.151 [2024-12-16 12:27:05.694826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:00.410 #9 NEW cov: 12395 ft: 13942 corp: 5/88b lim: 45 exec/s: 0 rss: 72Mb L: 41/41 MS: 1 InsertRepeatedBytes- 00:07:00.410 [2024-12-16 12:27:05.754385] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff56ff cdw11:ffff0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.410 [2024-12-16 12:27:05.754410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.410 #10 NEW cov: 12395 ft: 13998 corp: 6/105b lim: 45 exec/s: 0 rss: 73Mb L: 17/41 MS: 1 CrossOver- 00:07:00.410 [2024-12-16 12:27:05.794449] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff56ff cdw11:ffff0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.410 [2024-12-16 12:27:05.794474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.410 #11 NEW cov: 12395 ft: 14055 corp: 7/122b lim: 45 exec/s: 0 rss: 73Mb L: 17/41 MS: 1 ChangeBit- 00:07:00.410 [2024-12-16 12:27:05.855154] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:11115611 cdw11:11110000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.410 [2024-12-16 12:27:05.855181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.410 [2024-12-16 12:27:05.855232] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:11111111 cdw11:11110000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.410 [2024-12-16 12:27:05.855245] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.410 [2024-12-16 12:27:05.855294] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:11111111 cdw11:11110007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.410 [2024-12-16 12:27:05.855308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.410 [2024-12-16 12:27:05.855357] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 
cdw10:88ffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.410 [2024-12-16 12:27:05.855370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:00.410 #12 NEW cov: 12395 ft: 14130 corp: 8/161b lim: 45 exec/s: 0 rss: 73Mb L: 39/41 MS: 1 InsertRepeatedBytes- 00:07:00.410 [2024-12-16 12:27:05.894888] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff56ff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.410 [2024-12-16 12:27:05.894914] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.410 [2024-12-16 12:27:05.894962] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.410 [2024-12-16 12:27:05.894976] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.410 #13 NEW cov: 12395 ft: 14503 corp: 9/179b lim: 45 exec/s: 0 rss: 73Mb L: 18/41 MS: 1 CopyPart- 00:07:00.410 [2024-12-16 12:27:05.935125] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:02020502 cdw11:02020000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.410 [2024-12-16 12:27:05.935150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.410 [2024-12-16 12:27:05.935201] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:02020202 cdw11:02020000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.410 [2024-12-16 12:27:05.935214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.410 [2024-12-16 12:27:05.935265] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:02020202 cdw11:02020000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.410 [2024-12-16 12:27:05.935279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.410 #16 NEW cov: 12395 ft: 14857 corp: 10/209b lim: 45 exec/s: 0 rss: 73Mb L: 30/41 MS: 3 InsertRepeatedBytes-ChangeBinInt-InsertRepeatedBytes- 00:07:00.669 [2024-12-16 12:27:05.975435] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:11115611 cdw11:11110002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.669 [2024-12-16 12:27:05.975461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.669 [2024-12-16 12:27:05.975511] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:11115757 cdw11:11110000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.669 [2024-12-16 12:27:05.975525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.669 [2024-12-16 12:27:05.975574] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:11111111 cdw11:11110000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.669 [2024-12-16 12:27:05.975591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 
sqhd:0011 p:0 m:0 dnr:0 00:07:00.669 [2024-12-16 12:27:05.975641] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ffff1111 cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.669 [2024-12-16 12:27:05.975655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:00.669 #17 NEW cov: 12395 ft: 14880 corp: 11/253b lim: 45 exec/s: 0 rss: 73Mb L: 44/44 MS: 1 InsertRepeatedBytes- 00:07:00.669 [2024-12-16 12:27:06.035171] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff56ff cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.669 [2024-12-16 12:27:06.035196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.669 #18 NEW cov: 12395 ft: 14925 corp: 12/268b lim: 45 exec/s: 0 rss: 73Mb L: 15/44 MS: 1 ChangeBinInt- 00:07:00.669 [2024-12-16 12:27:06.075694] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:76767676 cdw11:24760003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.669 [2024-12-16 12:27:06.075718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.669 [2024-12-16 12:27:06.075768] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:76767676 cdw11:76760003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.669 [2024-12-16 12:27:06.075781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.669 [2024-12-16 12:27:06.075832] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:76767676 cdw11:76760003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.669 [2024-12-16 12:27:06.075845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.669 [2024-12-16 12:27:06.075897] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ff880007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.669 [2024-12-16 12:27:06.075909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:00.670 #19 NEW cov: 12395 ft: 14983 corp: 13/310b lim: 45 exec/s: 0 rss: 73Mb L: 42/44 MS: 1 InsertByte- 00:07:00.670 [2024-12-16 12:27:06.135870] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:76767676 cdw11:24760003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.670 [2024-12-16 12:27:06.135895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.670 [2024-12-16 12:27:06.135947] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:76767676 cdw11:76760003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.670 [2024-12-16 12:27:06.135959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.670 [2024-12-16 12:27:06.136008] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:76767676 cdw11:76760003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.670 
[2024-12-16 12:27:06.136021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.670 [2024-12-16 12:27:06.136071] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ff880007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.670 [2024-12-16 12:27:06.136083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:00.670 NEW_FUNC[1/1]: 0x1c586b8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:652 00:07:00.670 #20 NEW cov: 12418 ft: 15100 corp: 14/352b lim: 45 exec/s: 0 rss: 73Mb L: 42/44 MS: 1 CopyPart- 00:07:00.670 [2024-12-16 12:27:06.195573] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff56ff cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.670 [2024-12-16 12:27:06.195598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.930 #21 NEW cov: 12418 ft: 15112 corp: 15/366b lim: 45 exec/s: 21 rss: 73Mb L: 14/44 MS: 1 EraseBytes- 00:07:00.930 [2024-12-16 12:27:06.256194] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:76767676 cdw11:24760003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.930 [2024-12-16 12:27:06.256219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.930 [2024-12-16 12:27:06.256270] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:76767676 cdw11:76760003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.930 [2024-12-16 12:27:06.256284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.930 [2024-12-16 12:27:06.256335] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:76767676 cdw11:76760003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.930 [2024-12-16 12:27:06.256349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.930 [2024-12-16 12:27:06.256397] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:f6ffffff cdw11:ff880007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.930 [2024-12-16 12:27:06.256410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:00.930 #22 NEW cov: 12418 ft: 15139 corp: 16/408b lim: 45 exec/s: 22 rss: 73Mb L: 42/44 MS: 1 ChangeBinInt- 00:07:00.930 [2024-12-16 12:27:06.315900] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff56ff cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.930 [2024-12-16 12:27:06.315925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.930 #23 NEW cov: 12418 ft: 15159 corp: 17/423b lim: 45 exec/s: 23 rss: 73Mb L: 15/44 MS: 1 CMP- DE: "\001\000\000\177"- 00:07:00.930 [2024-12-16 12:27:06.356420] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:76767676 cdw11:24760003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.930 
[2024-12-16 12:27:06.356445] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.930 [2024-12-16 12:27:06.356498] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:76767676 cdw11:76760003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.930 [2024-12-16 12:27:06.356511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.930 [2024-12-16 12:27:06.356558] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00007676 cdw11:002a0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.930 [2024-12-16 12:27:06.356571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.930 [2024-12-16 12:27:06.356624] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ff880007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.930 [2024-12-16 12:27:06.356637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:00.930 #24 NEW cov: 12418 ft: 15186 corp: 18/465b lim: 45 exec/s: 24 rss: 73Mb L: 42/44 MS: 1 ChangeBinInt- 00:07:00.930 [2024-12-16 12:27:06.396077] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:0a010000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.930 [2024-12-16 12:27:06.396105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.930 #27 NEW cov: 12418 ft: 15225 corp: 19/474b lim: 45 exec/s: 27 rss: 73Mb L: 9/44 MS: 3 ShuffleBytes-PersAutoDict-CMP- DE: "\001\000\000\177"-"\000\000\000\000"- 00:07:00.930 [2024-12-16 12:27:06.436673] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:11115611 cdw11:11110003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.930 [2024-12-16 12:27:06.436698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.930 [2024-12-16 12:27:06.436751] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:11111111 cdw11:11110000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.930 [2024-12-16 12:27:06.436765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.930 [2024-12-16 12:27:06.436816] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:11111111 cdw11:11110007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.930 [2024-12-16 12:27:06.436830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.930 [2024-12-16 12:27:06.436878] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:88ffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.930 [2024-12-16 12:27:06.436890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:00.930 #28 NEW cov: 12418 ft: 15262 corp: 20/513b lim: 45 exec/s: 28 rss: 73Mb L: 39/44 MS: 1 ChangeByte- 00:07:00.930 [2024-12-16 12:27:06.476326] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff56ff cdw11:ffff0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.930 [2024-12-16 12:27:06.476350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.190 #29 NEW cov: 12418 ft: 15272 corp: 21/529b lim: 45 exec/s: 29 rss: 73Mb L: 16/44 MS: 1 ChangeByte- 00:07:01.190 [2024-12-16 12:27:06.517049] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:11115611 cdw11:11110002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.190 [2024-12-16 12:27:06.517074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.190 [2024-12-16 12:27:06.517125] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:11115757 cdw11:11110000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.190 [2024-12-16 12:27:06.517139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.190 [2024-12-16 12:27:06.517168] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:11111111 cdw11:11110000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.190 [2024-12-16 12:27:06.517179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:01.190 [2024-12-16 12:27:06.517195] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:11ff1193 cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.190 [2024-12-16 12:27:06.517205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:01.190 [2024-12-16 12:27:06.517221] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.190 [2024-12-16 12:27:06.517231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:01.190 #30 NEW cov: 12418 ft: 15392 corp: 22/574b lim: 45 exec/s: 30 rss: 73Mb L: 45/45 MS: 1 InsertByte- 00:07:01.190 [2024-12-16 12:27:06.576644] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00ff5606 cdw11:ffff0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.190 [2024-12-16 12:27:06.576670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.190 #31 NEW cov: 12418 ft: 15396 corp: 23/591b lim: 45 exec/s: 31 rss: 73Mb L: 17/45 MS: 1 ChangeBinInt- 00:07:01.190 [2024-12-16 12:27:06.617181] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:11115611 cdw11:11110002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.190 [2024-12-16 12:27:06.617206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.190 [2024-12-16 12:27:06.617258] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:11115757 cdw11:11110000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.190 [2024-12-16 12:27:06.617271] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.190 [2024-12-16 12:27:06.617321] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:11111111 cdw11:11110000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.190 [2024-12-16 12:27:06.617335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:01.190 [2024-12-16 12:27:06.617385] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ffff1111 cdw11:ff680007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.190 [2024-12-16 12:27:06.617397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:01.190 #32 NEW cov: 12418 ft: 15408 corp: 24/635b lim: 45 exec/s: 32 rss: 73Mb L: 44/45 MS: 1 ChangeByte- 00:07:01.190 [2024-12-16 12:27:06.656865] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ff0156ff cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.190 [2024-12-16 12:27:06.656890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.190 #33 NEW cov: 12418 ft: 15420 corp: 25/652b lim: 45 exec/s: 33 rss: 73Mb L: 17/45 MS: 1 CMP- DE: "\001\000\000\000"- 00:07:01.190 [2024-12-16 12:27:06.697184] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:56ffff56 cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.190 [2024-12-16 12:27:06.697209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.190 #34 NEW cov: 12418 ft: 15430 corp: 26/669b lim: 45 exec/s: 34 rss: 73Mb L: 17/45 MS: 1 CrossOver- 00:07:01.449 [2024-12-16 12:27:06.757672] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:02020502 cdw11:02020000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.449 [2024-12-16 12:27:06.757698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.449 [2024-12-16 12:27:06.757750] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:0202020a cdw11:02020000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.449 [2024-12-16 12:27:06.757766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.449 [2024-12-16 12:27:06.757815] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:02020202 cdw11:02020000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.449 [2024-12-16 12:27:06.757829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:01.449 #35 NEW cov: 12418 ft: 15435 corp: 27/699b lim: 45 exec/s: 35 rss: 74Mb L: 30/45 MS: 1 ChangeBit- 00:07:01.449 [2024-12-16 12:27:06.817859] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:02020502 cdw11:02020000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.449 [2024-12-16 12:27:06.817885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.449 [2024-12-16 12:27:06.817938] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:0202020a cdw11:02020000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.449 [2024-12-16 12:27:06.817951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.449 [2024-12-16 12:27:06.818000] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:02020202 cdw11:02020000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.449 [2024-12-16 12:27:06.818014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:01.449 #36 NEW cov: 12418 ft: 15514 corp: 28/729b lim: 45 exec/s: 36 rss: 74Mb L: 30/45 MS: 1 PersAutoDict- DE: "\000\000\000\000"- 00:07:01.449 [2024-12-16 12:27:06.878283] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:11115625 cdw11:11110000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.450 [2024-12-16 12:27:06.878308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.450 [2024-12-16 12:27:06.878359] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:57115757 cdw11:11110000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.450 [2024-12-16 12:27:06.878372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.450 [2024-12-16 12:27:06.878422] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:11111111 cdw11:11110000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.450 [2024-12-16 12:27:06.878435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:01.450 [2024-12-16 12:27:06.878483] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:11ff1111 cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.450 [2024-12-16 12:27:06.878496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:01.450 [2024-12-16 12:27:06.878543] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.450 [2024-12-16 12:27:06.878557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:01.450 #37 NEW cov: 12418 ft: 15526 corp: 29/774b lim: 45 exec/s: 37 rss: 74Mb L: 45/45 MS: 1 InsertByte- 00:07:01.450 [2024-12-16 12:27:06.938297] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:11115611 cdw11:11110003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.450 [2024-12-16 12:27:06.938322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.450 [2024-12-16 12:27:06.938373] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:11111111 cdw11:11110000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.450 [2024-12-16 12:27:06.938387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.450 [2024-12-16 
12:27:06.938437] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00270000 cdw11:11110007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.450 [2024-12-16 12:27:06.938451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:01.450 [2024-12-16 12:27:06.938499] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:88ffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.450 [2024-12-16 12:27:06.938516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:01.450 #38 NEW cov: 12418 ft: 15538 corp: 30/813b lim: 45 exec/s: 38 rss: 74Mb L: 39/45 MS: 1 ChangeBinInt- 00:07:01.450 [2024-12-16 12:27:06.998512] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:11115611 cdw11:11110003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.450 [2024-12-16 12:27:06.998537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.450 [2024-12-16 12:27:06.998590] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:11111111 cdw11:11110000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.450 [2024-12-16 12:27:06.998603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.450 [2024-12-16 12:27:06.998662] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:57575757 cdw11:57110000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.450 [2024-12-16 12:27:06.998687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:01.450 [2024-12-16 12:27:06.998738] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:11111111 cdw11:11110000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.450 [2024-12-16 12:27:06.998751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:01.709 #39 NEW cov: 12418 ft: 15603 corp: 31/852b lim: 45 exec/s: 39 rss: 74Mb L: 39/45 MS: 1 CrossOver- 00:07:01.709 [2024-12-16 12:27:07.038099] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffff56 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.709 [2024-12-16 12:27:07.038123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.709 #40 NEW cov: 12418 ft: 15625 corp: 32/867b lim: 45 exec/s: 40 rss: 74Mb L: 15/45 MS: 1 ChangeBinInt- 00:07:01.709 [2024-12-16 12:27:07.078281] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffff56 cdw11:c9ff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.709 [2024-12-16 12:27:07.078305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.709 #41 NEW cov: 12418 ft: 15642 corp: 33/882b lim: 45 exec/s: 41 rss: 74Mb L: 15/45 MS: 1 ChangeByte- 00:07:01.709 [2024-12-16 12:27:07.118386] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff56ff cdw11:ffff0004 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.709 [2024-12-16 12:27:07.118412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.709 #42 NEW cov: 12418 ft: 15662 corp: 34/899b lim: 45 exec/s: 42 rss: 74Mb L: 17/45 MS: 1 CrossOver- 00:07:01.710 [2024-12-16 12:27:07.179144] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:11115625 cdw11:11110000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.710 [2024-12-16 12:27:07.179168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.710 [2024-12-16 12:27:07.179221] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:57115757 cdw11:11110000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.710 [2024-12-16 12:27:07.179234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.710 [2024-12-16 12:27:07.179283] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:11111111 cdw11:11110000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.710 [2024-12-16 12:27:07.179297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:01.710 [2024-12-16 12:27:07.179347] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:11ff1811 cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.710 [2024-12-16 12:27:07.179360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:01.710 [2024-12-16 12:27:07.179409] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.710 [2024-12-16 12:27:07.179423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:01.710 #43 NEW cov: 12418 ft: 15684 corp: 35/944b lim: 45 exec/s: 43 rss: 74Mb L: 45/45 MS: 1 ChangeBinInt- 00:07:01.710 [2024-12-16 12:27:07.238740] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.710 [2024-12-16 12:27:07.238765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.710 #44 NEW cov: 12418 ft: 15692 corp: 36/959b lim: 45 exec/s: 22 rss: 74Mb L: 15/45 MS: 1 CrossOver- 00:07:01.710 #44 DONE cov: 12418 ft: 15692 corp: 36/959b lim: 45 exec/s: 22 rss: 74Mb 00:07:01.710 ###### Recommended dictionary. ###### 00:07:01.710 "\001\000\000\177" # Uses: 1 00:07:01.710 "\000\000\000\000" # Uses: 1 00:07:01.710 "\001\000\000\000" # Uses: 0 00:07:01.710 ###### End of recommended dictionary. 
###### 00:07:01.710 Done 44 runs in 2 second(s) 00:07:01.969 12:27:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_5.conf /var/tmp/suppress_nvmf_fuzz 00:07:01.969 12:27:07 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:01.969 12:27:07 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:01.969 12:27:07 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1 00:07:01.969 12:27:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=6 00:07:01.969 12:27:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:01.969 12:27:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:01.969 12:27:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:07:01.969 12:27:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_6.conf 00:07:01.969 12:27:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:01.969 12:27:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:01.969 12:27:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 6 00:07:01.969 12:27:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4406 00:07:01.969 12:27:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:07:01.969 12:27:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' 00:07:01.969 12:27:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4406"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:01.969 12:27:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:01.969 12:27:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:01.969 12:27:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' -c /tmp/fuzz_json_6.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 -Z 6 00:07:01.969 [2024-12-16 12:27:07.411102] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:07:01.969 [2024-12-16 12:27:07.411173] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid986534 ] 00:07:02.228 [2024-12-16 12:27:07.670103] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.228 [2024-12-16 12:27:07.729155] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.228 [2024-12-16 12:27:07.788294] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:02.488 [2024-12-16 12:27:07.804596] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4406 *** 00:07:02.488 INFO: Running with entropic power schedule (0xFF, 100). 
00:07:02.488 INFO: Seed: 3305424032 00:07:02.488 INFO: Loaded 1 modules (390948 inline 8-bit counters): 390948 [0x2c8afcc, 0x2cea6f0), 00:07:02.488 INFO: Loaded 1 PC tables (390948 PCs): 390948 [0x2cea6f0,0x32e1930), 00:07:02.488 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:07:02.488 INFO: A corpus is not provided, starting from an empty corpus 00:07:02.488 #2 INITED exec/s: 0 rss: 65Mb 00:07:02.488 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:02.488 This may also happen if the target rejected all inputs we tried so far 00:07:02.488 [2024-12-16 12:27:07.871373] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a05 cdw11:00000000 00:07:02.488 [2024-12-16 12:27:07.871407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.747 NEW_FUNC[1/715]: 0x446728 in fuzz_admin_delete_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:161 00:07:02.747 NEW_FUNC[2/715]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:02.747 #3 NEW cov: 12108 ft: 12077 corp: 2/3b lim: 10 exec/s: 0 rss: 72Mb L: 2/2 MS: 1 InsertByte- 00:07:02.747 [2024-12-16 12:27:08.221709] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000f8fa cdw11:00000000 00:07:02.747 [2024-12-16 12:27:08.221755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.747 #4 NEW cov: 12221 ft: 12833 corp: 3/5b lim: 10 exec/s: 0 rss: 72Mb L: 2/2 MS: 1 ChangeBinInt- 00:07:02.747 [2024-12-16 12:27:08.281726] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002501 cdw11:00000000 00:07:02.747 [2024-12-16 12:27:08.281756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.747 #8 NEW cov: 12227 ft: 13015 corp: 4/7b lim: 10 exec/s: 0 rss: 72Mb L: 2/2 MS: 4 EraseBytes-ChangeBit-ChangeBinInt-InsertByte- 00:07:03.007 [2024-12-16 12:27:08.331831] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00003d2c cdw11:00000000 00:07:03.007 [2024-12-16 12:27:08.331859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.007 #10 NEW cov: 12312 ft: 13342 corp: 5/9b lim: 10 exec/s: 0 rss: 72Mb L: 2/2 MS: 2 ChangeByte-InsertByte- 00:07:03.007 [2024-12-16 12:27:08.371971] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a25 cdw11:00000000 00:07:03.007 [2024-12-16 12:27:08.371998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.007 #11 NEW cov: 12312 ft: 13486 corp: 6/12b lim: 10 exec/s: 0 rss: 72Mb L: 3/3 MS: 1 CrossOver- 00:07:03.007 [2024-12-16 12:27:08.412053] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00003d41 cdw11:00000000 00:07:03.007 [2024-12-16 12:27:08.412078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:07:03.007 #12 NEW cov: 12312 ft: 13514 corp: 7/15b lim: 10 exec/s: 0 rss: 72Mb L: 3/3 MS: 1 InsertByte- 00:07:03.007 [2024-12-16 12:27:08.472223] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000f8f8 cdw11:00000000 00:07:03.007 [2024-12-16 12:27:08.472253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.007 #14 NEW cov: 12312 ft: 13625 corp: 8/17b lim: 10 exec/s: 0 rss: 72Mb L: 2/3 MS: 2 EraseBytes-CopyPart- 00:07:03.007 [2024-12-16 12:27:08.532622] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002bff cdw11:00000000 00:07:03.007 [2024-12-16 12:27:08.532649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.007 [2024-12-16 12:27:08.532763] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:03.007 [2024-12-16 12:27:08.532778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.007 #16 NEW cov: 12312 ft: 13833 corp: 9/22b lim: 10 exec/s: 0 rss: 72Mb L: 5/5 MS: 2 ChangeByte-CMP- DE: "\377\377\377\377"- 00:07:03.266 [2024-12-16 12:27:08.572547] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000f8f8 cdw11:00000000 00:07:03.266 [2024-12-16 12:27:08.572576] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.266 #17 NEW cov: 12312 ft: 13870 corp: 10/24b lim: 10 exec/s: 0 rss: 72Mb L: 2/5 MS: 1 ShuffleBytes- 00:07:03.266 [2024-12-16 12:27:08.632970] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000bbbb cdw11:00000000 00:07:03.266 [2024-12-16 12:27:08.632997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.266 [2024-12-16 12:27:08.633132] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000bb3d cdw11:00000000 00:07:03.266 [2024-12-16 12:27:08.633150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.266 #20 NEW cov: 12312 ft: 13928 corp: 11/28b lim: 10 exec/s: 0 rss: 72Mb L: 4/5 MS: 3 ChangeByte-ShuffleBytes-InsertRepeatedBytes- 00:07:03.266 [2024-12-16 12:27:08.672970] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000d000 cdw11:00000000 00:07:03.266 [2024-12-16 12:27:08.672996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.266 [2024-12-16 12:27:08.673123] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:03.266 [2024-12-16 12:27:08.673140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.266 #21 NEW cov: 12312 ft: 13939 corp: 12/33b lim: 10 exec/s: 0 rss: 72Mb L: 5/5 MS: 1 ChangeBinInt- 00:07:03.266 [2024-12-16 12:27:08.732926] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000093f8 cdw11:00000000 
00:07:03.266 [2024-12-16 12:27:08.732952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.266 NEW_FUNC[1/1]: 0x1c586b8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:652 00:07:03.266 #25 NEW cov: 12335 ft: 13977 corp: 13/36b lim: 10 exec/s: 0 rss: 72Mb L: 3/5 MS: 4 EraseBytes-CopyPart-ChangeByte-CrossOver- 00:07:03.266 [2024-12-16 12:27:08.773112] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a25 cdw11:00000000 00:07:03.266 [2024-12-16 12:27:08.773140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.266 #26 NEW cov: 12335 ft: 14058 corp: 14/39b lim: 10 exec/s: 0 rss: 73Mb L: 3/5 MS: 1 ShuffleBytes- 00:07:03.526 [2024-12-16 12:27:08.843439] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000250a cdw11:00000000 00:07:03.526 [2024-12-16 12:27:08.843466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.526 [2024-12-16 12:27:08.843597] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000125 cdw11:00000000 00:07:03.526 [2024-12-16 12:27:08.843617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.526 #27 NEW cov: 12335 ft: 14061 corp: 15/44b lim: 10 exec/s: 27 rss: 73Mb L: 5/5 MS: 1 CrossOver- 00:07:03.526 [2024-12-16 12:27:08.914048] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002b34 cdw11:00000000 00:07:03.526 [2024-12-16 12:27:08.914075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.526 [2024-12-16 12:27:08.914187] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00003434 cdw11:00000000 00:07:03.526 [2024-12-16 12:27:08.914204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.526 [2024-12-16 12:27:08.914314] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:03.526 [2024-12-16 12:27:08.914330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.526 [2024-12-16 12:27:08.914437] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:03.526 [2024-12-16 12:27:08.914453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:03.526 #28 NEW cov: 12335 ft: 14328 corp: 16/52b lim: 10 exec/s: 28 rss: 73Mb L: 8/8 MS: 1 InsertRepeatedBytes- 00:07:03.526 [2024-12-16 12:27:08.953676] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000f8f8 cdw11:00000000 00:07:03.526 [2024-12-16 12:27:08.953702] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.526 #29 NEW cov: 12335 ft: 14392 corp: 17/54b lim: 10 exec/s: 29 rss: 73Mb L: 2/8 MS: 1 CrossOver- 00:07:03.526 
[2024-12-16 12:27:08.993716] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000f8f8 cdw11:00000000 00:07:03.526 [2024-12-16 12:27:08.993742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.526 #30 NEW cov: 12335 ft: 14405 corp: 18/57b lim: 10 exec/s: 30 rss: 73Mb L: 3/8 MS: 1 InsertByte- 00:07:03.526 [2024-12-16 12:27:09.063960] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002c3d cdw11:00000000 00:07:03.526 [2024-12-16 12:27:09.063988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.786 #31 NEW cov: 12335 ft: 14442 corp: 19/60b lim: 10 exec/s: 31 rss: 73Mb L: 3/8 MS: 1 ShuffleBytes- 00:07:03.786 [2024-12-16 12:27:09.134551] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000f8f8 cdw11:00000000 00:07:03.786 [2024-12-16 12:27:09.134577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.786 [2024-12-16 12:27:09.134692] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:03.786 [2024-12-16 12:27:09.134710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.786 [2024-12-16 12:27:09.134820] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:03.786 [2024-12-16 12:27:09.134836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.786 #32 NEW cov: 12335 ft: 14577 corp: 20/66b lim: 10 exec/s: 32 rss: 73Mb L: 6/8 MS: 1 PersAutoDict- DE: "\377\377\377\377"- 00:07:03.786 [2024-12-16 12:27:09.184568] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000f8f8 cdw11:00000000 00:07:03.786 [2024-12-16 12:27:09.184594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.786 [2024-12-16 12:27:09.184713] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00003d41 cdw11:00000000 00:07:03.786 [2024-12-16 12:27:09.184732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.786 #33 NEW cov: 12335 ft: 14608 corp: 21/71b lim: 10 exec/s: 33 rss: 73Mb L: 5/8 MS: 1 CrossOver- 00:07:03.786 [2024-12-16 12:27:09.224396] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000f0f8 cdw11:00000000 00:07:03.786 [2024-12-16 12:27:09.224423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.786 #34 NEW cov: 12335 ft: 14618 corp: 22/73b lim: 10 exec/s: 34 rss: 73Mb L: 2/8 MS: 1 ChangeBit- 00:07:03.786 [2024-12-16 12:27:09.264876] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002bff cdw11:00000000 00:07:03.786 [2024-12-16 12:27:09.264903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:07:03.786 [2024-12-16 12:27:09.265010] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000f8f8 cdw11:00000000 00:07:03.786 [2024-12-16 12:27:09.265043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.786 [2024-12-16 12:27:09.265155] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:03.786 [2024-12-16 12:27:09.265170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.786 #35 NEW cov: 12335 ft: 14633 corp: 23/80b lim: 10 exec/s: 35 rss: 73Mb L: 7/8 MS: 1 CrossOver- 00:07:03.786 [2024-12-16 12:27:09.304645] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000f8f8 cdw11:00000000 00:07:03.786 [2024-12-16 12:27:09.304671] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.786 #36 NEW cov: 12335 ft: 14640 corp: 24/82b lim: 10 exec/s: 36 rss: 73Mb L: 2/8 MS: 1 ShuffleBytes- 00:07:03.786 [2024-12-16 12:27:09.344968] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000bbbb cdw11:00000000 00:07:03.786 [2024-12-16 12:27:09.344994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.786 [2024-12-16 12:27:09.345104] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000bb3d cdw11:00000000 00:07:03.786 [2024-12-16 12:27:09.345121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.046 #37 NEW cov: 12335 ft: 14651 corp: 25/86b lim: 10 exec/s: 37 rss: 73Mb L: 4/8 MS: 1 ShuffleBytes- 00:07:04.046 [2024-12-16 12:27:09.404938] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a25 cdw11:00000000 00:07:04.046 [2024-12-16 12:27:09.404966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.046 #38 NEW cov: 12335 ft: 14678 corp: 26/88b lim: 10 exec/s: 38 rss: 73Mb L: 2/8 MS: 1 EraseBytes- 00:07:04.046 [2024-12-16 12:27:09.455030] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ba3d cdw11:00000000 00:07:04.046 [2024-12-16 12:27:09.455057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.046 #39 NEW cov: 12335 ft: 14702 corp: 27/91b lim: 10 exec/s: 39 rss: 73Mb L: 3/8 MS: 1 ChangeByte- 00:07:04.046 [2024-12-16 12:27:09.515237] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000f8f8 cdw11:00000000 00:07:04.046 [2024-12-16 12:27:09.515265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.046 #40 NEW cov: 12335 ft: 14707 corp: 28/94b lim: 10 exec/s: 40 rss: 73Mb L: 3/8 MS: 1 InsertByte- 00:07:04.046 [2024-12-16 12:27:09.575769] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a25 cdw11:00000000 00:07:04.046 [2024-12-16 12:27:09.575796] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.046 [2024-12-16 12:27:09.575904] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00009797 cdw11:00000000 00:07:04.046 [2024-12-16 12:27:09.575921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.046 [2024-12-16 12:27:09.576033] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00009701 cdw11:00000000 00:07:04.046 [2024-12-16 12:27:09.576049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:04.046 #41 NEW cov: 12335 ft: 14720 corp: 29/100b lim: 10 exec/s: 41 rss: 73Mb L: 6/8 MS: 1 InsertRepeatedBytes- 00:07:04.305 [2024-12-16 12:27:09.625976] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000f88f cdw11:00000000 00:07:04.305 [2024-12-16 12:27:09.626004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.305 [2024-12-16 12:27:09.626124] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00008f8f cdw11:00000000 00:07:04.305 [2024-12-16 12:27:09.626140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.305 [2024-12-16 12:27:09.626246] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00008f8f cdw11:00000000 00:07:04.305 [2024-12-16 12:27:09.626263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:04.306 #42 NEW cov: 12335 ft: 14763 corp: 30/107b lim: 10 exec/s: 42 rss: 73Mb L: 7/8 MS: 1 InsertRepeatedBytes- 00:07:04.306 [2024-12-16 12:27:09.666122] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000f8ff cdw11:00000000 00:07:04.306 [2024-12-16 12:27:09.666150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.306 [2024-12-16 12:27:09.666262] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:04.306 [2024-12-16 12:27:09.666280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.306 [2024-12-16 12:27:09.666386] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000fff8 cdw11:00000000 00:07:04.306 [2024-12-16 12:27:09.666401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:04.306 #43 NEW cov: 12335 ft: 14775 corp: 31/113b lim: 10 exec/s: 43 rss: 73Mb L: 6/8 MS: 1 PersAutoDict- DE: "\377\377\377\377"- 00:07:04.306 [2024-12-16 12:27:09.726242] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000f8fa cdw11:00000000 00:07:04.306 [2024-12-16 12:27:09.726271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.306 [2024-12-16 12:27:09.726385] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:04.306 [2024-12-16 12:27:09.726402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.306 [2024-12-16 12:27:09.726515] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:04.306 [2024-12-16 12:27:09.726532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:04.306 #44 NEW cov: 12335 ft: 14779 corp: 32/119b lim: 10 exec/s: 44 rss: 73Mb L: 6/8 MS: 1 PersAutoDict- DE: "\377\377\377\377"- 00:07:04.306 [2024-12-16 12:27:09.765939] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000f8f8 cdw11:00000000 00:07:04.306 [2024-12-16 12:27:09.765967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.306 #45 NEW cov: 12335 ft: 14853 corp: 33/122b lim: 10 exec/s: 45 rss: 73Mb L: 3/8 MS: 1 CrossOver- 00:07:04.306 [2024-12-16 12:27:09.836164] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002c29 cdw11:00000000 00:07:04.306 [2024-12-16 12:27:09.836190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.306 #46 NEW cov: 12335 ft: 14878 corp: 34/125b lim: 10 exec/s: 23 rss: 73Mb L: 3/8 MS: 1 CrossOver- 00:07:04.306 #46 DONE cov: 12335 ft: 14878 corp: 34/125b lim: 10 exec/s: 23 rss: 73Mb 00:07:04.306 ###### Recommended dictionary. ###### 00:07:04.306 "\377\377\377\377" # Uses: 3 00:07:04.306 ###### End of recommended dictionary. 
###### 00:07:04.306 Done 46 runs in 2 second(s) 00:07:04.565 12:27:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_6.conf /var/tmp/suppress_nvmf_fuzz 00:07:04.565 12:27:09 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:04.565 12:27:09 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:04.565 12:27:09 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 7 1 0x1 00:07:04.565 12:27:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=7 00:07:04.565 12:27:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:04.565 12:27:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:04.565 12:27:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:07:04.565 12:27:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_7.conf 00:07:04.565 12:27:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:04.565 12:27:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:04.565 12:27:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 7 00:07:04.565 12:27:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4407 00:07:04.565 12:27:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:07:04.565 12:27:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' 00:07:04.565 12:27:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4407"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:04.565 12:27:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:04.565 12:27:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:04.565 12:27:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' -c /tmp/fuzz_json_7.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 -Z 7 00:07:04.565 [2024-12-16 12:27:10.010475] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:07:04.566 [2024-12-16 12:27:10.010540] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid987067 ] 00:07:04.825 [2024-12-16 12:27:10.288437] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.825 [2024-12-16 12:27:10.348455] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.084 [2024-12-16 12:27:10.407731] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:05.084 [2024-12-16 12:27:10.424055] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4407 *** 00:07:05.084 INFO: Running with entropic power schedule (0xFF, 100). 
00:07:05.084 INFO: Seed: 1629443359 00:07:05.084 INFO: Loaded 1 modules (390948 inline 8-bit counters): 390948 [0x2c8afcc, 0x2cea6f0), 00:07:05.084 INFO: Loaded 1 PC tables (390948 PCs): 390948 [0x2cea6f0,0x32e1930), 00:07:05.084 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:07:05.084 INFO: A corpus is not provided, starting from an empty corpus 00:07:05.084 #2 INITED exec/s: 0 rss: 66Mb 00:07:05.084 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:05.084 This may also happen if the target rejected all inputs we tried so far 00:07:05.084 [2024-12-16 12:27:10.491014] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ce5b cdw11:00000000 00:07:05.084 [2024-12-16 12:27:10.491051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.343 NEW_FUNC[1/714]: 0x447128 in fuzz_admin_delete_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:172 00:07:05.343 NEW_FUNC[2/714]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:05.343 #4 NEW cov: 12107 ft: 12106 corp: 2/3b lim: 10 exec/s: 0 rss: 73Mb L: 2/2 MS: 2 ChangeByte-InsertByte- 00:07:05.343 [2024-12-16 12:27:10.821269] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ce5b cdw11:00000000 00:07:05.343 [2024-12-16 12:27:10.821305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.343 NEW_FUNC[1/1]: 0xfabab8 in spdk_process_is_primary /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/env.c:315 00:07:05.343 #5 NEW cov: 12221 ft: 12800 corp: 3/5b lim: 10 exec/s: 0 rss: 73Mb L: 2/2 MS: 1 ShuffleBytes- 00:07:05.343 [2024-12-16 12:27:10.891474] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ce5b cdw11:00000000 00:07:05.343 [2024-12-16 12:27:10.891502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.603 #6 NEW cov: 12227 ft: 13106 corp: 4/7b lim: 10 exec/s: 0 rss: 73Mb L: 2/2 MS: 1 ShuffleBytes- 00:07:05.603 [2024-12-16 12:27:10.962352] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ce5b cdw11:00000000 00:07:05.603 [2024-12-16 12:27:10.962378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.603 [2024-12-16 12:27:10.962478] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000caca cdw11:00000000 00:07:05.603 [2024-12-16 12:27:10.962495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.603 [2024-12-16 12:27:10.962604] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000caca cdw11:00000000 00:07:05.603 [2024-12-16 12:27:10.962624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:05.603 [2024-12-16 12:27:10.962734] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE 
IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000caca cdw11:00000000 00:07:05.603 [2024-12-16 12:27:10.962751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:05.603 [2024-12-16 12:27:10.962861] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000caca cdw11:00000000 00:07:05.603 [2024-12-16 12:27:10.962880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:05.603 #7 NEW cov: 12312 ft: 13668 corp: 5/17b lim: 10 exec/s: 0 rss: 73Mb L: 10/10 MS: 1 InsertRepeatedBytes- 00:07:05.603 [2024-12-16 12:27:11.031816] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:000024ce cdw11:00000000 00:07:05.603 [2024-12-16 12:27:11.031844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.603 #8 NEW cov: 12312 ft: 13760 corp: 6/20b lim: 10 exec/s: 0 rss: 73Mb L: 3/10 MS: 1 InsertByte- 00:07:05.603 [2024-12-16 12:27:11.082802] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ce00 cdw11:00000000 00:07:05.603 [2024-12-16 12:27:11.082827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.603 [2024-12-16 12:27:11.082936] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000055f cdw11:00000000 00:07:05.603 [2024-12-16 12:27:11.082968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.603 [2024-12-16 12:27:11.083074] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000a66f cdw11:00000000 00:07:05.603 [2024-12-16 12:27:11.083090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:05.603 [2024-12-16 12:27:11.083203] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000add1 cdw11:00000000 00:07:05.603 [2024-12-16 12:27:11.083220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:05.603 [2024-12-16 12:27:11.083323] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000e65b cdw11:00000000 00:07:05.603 [2024-12-16 12:27:11.083339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:05.603 #9 NEW cov: 12312 ft: 13818 corp: 7/30b lim: 10 exec/s: 0 rss: 73Mb L: 10/10 MS: 1 CMP- DE: "\000\005_\246o\255\321\346"- 00:07:05.603 [2024-12-16 12:27:11.132957] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ce01 cdw11:00000000 00:07:05.603 [2024-12-16 12:27:11.132984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.603 [2024-12-16 12:27:11.133112] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:05.603 [2024-12-16 12:27:11.133130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 
cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.603 [2024-12-16 12:27:11.133239] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:05.603 [2024-12-16 12:27:11.133255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:05.603 [2024-12-16 12:27:11.133359] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:05.603 [2024-12-16 12:27:11.133375] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:05.603 [2024-12-16 12:27:11.133480] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:000004ca cdw11:00000000 00:07:05.603 [2024-12-16 12:27:11.133497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:05.866 #10 NEW cov: 12312 ft: 13900 corp: 8/40b lim: 10 exec/s: 0 rss: 74Mb L: 10/10 MS: 1 CMP- DE: "\001\000\000\000\000\000\000\004"- 00:07:05.866 [2024-12-16 12:27:11.202532] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ce00 cdw11:00000000 00:07:05.866 [2024-12-16 12:27:11.202558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.866 [2024-12-16 12:27:11.202693] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000055f cdw11:00000000 00:07:05.866 [2024-12-16 12:27:11.202709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.866 #11 NEW cov: 12312 ft: 14095 corp: 9/45b lim: 10 exec/s: 0 rss: 74Mb L: 5/10 MS: 1 CrossOver- 00:07:05.866 [2024-12-16 12:27:11.253313] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ce00 cdw11:00000000 00:07:05.866 [2024-12-16 12:27:11.253338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.866 [2024-12-16 12:27:11.253466] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000565 cdw11:00000000 00:07:05.866 [2024-12-16 12:27:11.253483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.866 [2024-12-16 12:27:11.253597] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00006565 cdw11:00000000 00:07:05.866 [2024-12-16 12:27:11.253618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:05.866 [2024-12-16 12:27:11.253729] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00006565 cdw11:00000000 00:07:05.866 [2024-12-16 12:27:11.253746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:05.866 [2024-12-16 12:27:11.253854] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:00005f5b cdw11:00000000 00:07:05.866 [2024-12-16 12:27:11.253871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:05.866 #12 NEW cov: 12312 ft: 14125 corp: 10/55b lim: 10 exec/s: 0 rss: 74Mb L: 10/10 MS: 1 InsertRepeatedBytes- 00:07:05.866 [2024-12-16 12:27:11.322720] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ce5b cdw11:00000000 00:07:05.866 [2024-12-16 12:27:11.322748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.866 #13 NEW cov: 12312 ft: 14189 corp: 11/57b lim: 10 exec/s: 0 rss: 74Mb L: 2/10 MS: 1 CopyPart- 00:07:05.866 [2024-12-16 12:27:11.373159] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ce20 cdw11:00000000 00:07:05.866 [2024-12-16 12:27:11.373187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.866 [2024-12-16 12:27:11.373299] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000055f cdw11:00000000 00:07:05.866 [2024-12-16 12:27:11.373315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.866 NEW_FUNC[1/1]: 0x1c586b8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:652 00:07:05.866 #14 NEW cov: 12335 ft: 14261 corp: 12/62b lim: 10 exec/s: 0 rss: 74Mb L: 5/10 MS: 1 ChangeBit- 00:07:05.866 [2024-12-16 12:27:11.423305] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ce00 cdw11:00000000 00:07:05.866 [2024-12-16 12:27:11.423331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.866 [2024-12-16 12:27:11.423433] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000055f cdw11:00000000 00:07:05.866 [2024-12-16 12:27:11.423452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.162 #15 NEW cov: 12335 ft: 14276 corp: 13/67b lim: 10 exec/s: 0 rss: 74Mb L: 5/10 MS: 1 CopyPart- 00:07:06.162 [2024-12-16 12:27:11.473232] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000cf5b cdw11:00000000 00:07:06.162 [2024-12-16 12:27:11.473257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.162 #16 NEW cov: 12335 ft: 14294 corp: 14/69b lim: 10 exec/s: 16 rss: 74Mb L: 2/10 MS: 1 ChangeBit- 00:07:06.162 [2024-12-16 12:27:11.544274] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ce00 cdw11:00000000 00:07:06.162 [2024-12-16 12:27:11.544299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.162 [2024-12-16 12:27:11.544413] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000055f cdw11:00000000 00:07:06.162 [2024-12-16 12:27:11.544429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.162 [2024-12-16 12:27:11.544541] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000a66f 
cdw11:00000000 00:07:06.162 [2024-12-16 12:27:11.544557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:06.162 [2024-12-16 12:27:11.544669] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000add1 cdw11:00000000 00:07:06.162 [2024-12-16 12:27:11.544685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:06.162 [2024-12-16 12:27:11.544804] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:00000005 cdw11:00000000 00:07:06.162 [2024-12-16 12:27:11.544821] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:06.162 #17 NEW cov: 12335 ft: 14304 corp: 15/79b lim: 10 exec/s: 17 rss: 74Mb L: 10/10 MS: 1 CrossOver- 00:07:06.162 [2024-12-16 12:27:11.614425] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00002a00 cdw11:00000000 00:07:06.162 [2024-12-16 12:27:11.614451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.162 [2024-12-16 12:27:11.614557] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:06.163 [2024-12-16 12:27:11.614573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.163 [2024-12-16 12:27:11.614703] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:06.163 [2024-12-16 12:27:11.614720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:06.163 [2024-12-16 12:27:11.614851] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:06.163 [2024-12-16 12:27:11.614868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:06.163 [2024-12-16 12:27:11.614984] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 00:07:06.163 [2024-12-16 12:27:11.615000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:06.163 #20 NEW cov: 12335 ft: 14361 corp: 16/89b lim: 10 exec/s: 20 rss: 74Mb L: 10/10 MS: 3 ChangeByte-ChangeBit-InsertRepeatedBytes- 00:07:06.163 [2024-12-16 12:27:11.663977] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00002a00 cdw11:00000000 00:07:06.163 [2024-12-16 12:27:11.664007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.163 [2024-12-16 12:27:11.664122] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000005b cdw11:00000000 00:07:06.163 [2024-12-16 12:27:11.664155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.163 #22 NEW cov: 12335 ft: 14371 corp: 17/93b lim: 10 exec/s: 22 rss: 74Mb L: 4/10 MS: 2 EraseBytes-CrossOver- 
00:07:06.163 [2024-12-16 12:27:11.714157] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ce00 cdw11:00000000 00:07:06.163 [2024-12-16 12:27:11.714185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.163 [2024-12-16 12:27:11.714292] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000055f cdw11:00000000 00:07:06.163 [2024-12-16 12:27:11.714310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.445 #23 NEW cov: 12335 ft: 14403 corp: 18/98b lim: 10 exec/s: 23 rss: 74Mb L: 5/10 MS: 1 ShuffleBytes- 00:07:06.445 [2024-12-16 12:27:11.764303] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00002a00 cdw11:00000000 00:07:06.445 [2024-12-16 12:27:11.764329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.445 [2024-12-16 12:27:11.764441] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000005b cdw11:00000000 00:07:06.445 [2024-12-16 12:27:11.764456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.445 #24 NEW cov: 12335 ft: 14441 corp: 19/103b lim: 10 exec/s: 24 rss: 74Mb L: 5/10 MS: 1 CopyPart- 00:07:06.445 [2024-12-16 12:27:11.835144] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ce00 cdw11:00000000 00:07:06.445 [2024-12-16 12:27:11.835170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.445 [2024-12-16 12:27:11.835282] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000055f cdw11:00000000 00:07:06.445 [2024-12-16 12:27:11.835298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.445 [2024-12-16 12:27:11.835409] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00005b00 cdw11:00000000 00:07:06.445 [2024-12-16 12:27:11.835424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:06.445 [2024-12-16 12:27:11.835533] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:06.445 [2024-12-16 12:27:11.835549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:06.445 [2024-12-16 12:27:11.835665] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 00:07:06.445 [2024-12-16 12:27:11.835681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:06.445 #25 NEW cov: 12335 ft: 14455 corp: 20/113b lim: 10 exec/s: 25 rss: 74Mb L: 10/10 MS: 1 CrossOver- 00:07:06.445 [2024-12-16 12:27:11.904515] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00005bce cdw11:00000000 00:07:06.445 [2024-12-16 12:27:11.904541] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.445 #26 NEW cov: 12335 ft: 14463 corp: 21/116b lim: 10 exec/s: 26 rss: 74Mb L: 3/10 MS: 1 CrossOver- 00:07:06.445 [2024-12-16 12:27:11.975585] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000c600 cdw11:00000000 00:07:06.445 [2024-12-16 12:27:11.975616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.445 [2024-12-16 12:27:11.975729] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000565 cdw11:00000000 00:07:06.446 [2024-12-16 12:27:11.975764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.446 [2024-12-16 12:27:11.975877] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00006565 cdw11:00000000 00:07:06.446 [2024-12-16 12:27:11.975895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:06.446 [2024-12-16 12:27:11.976007] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00006565 cdw11:00000000 00:07:06.446 [2024-12-16 12:27:11.976027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:06.446 [2024-12-16 12:27:11.976143] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:00005f5b cdw11:00000000 00:07:06.446 [2024-12-16 12:27:11.976160] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:06.705 #27 NEW cov: 12335 ft: 14512 corp: 22/126b lim: 10 exec/s: 27 rss: 74Mb L: 10/10 MS: 1 ChangeBinInt- 00:07:06.705 [2024-12-16 12:27:12.045324] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ce00 cdw11:00000000 00:07:06.705 [2024-12-16 12:27:12.045351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.705 [2024-12-16 12:27:12.045458] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000055f cdw11:00000000 00:07:06.705 [2024-12-16 12:27:12.045474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.705 [2024-12-16 12:27:12.045576] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000a66f cdw11:00000000 00:07:06.705 [2024-12-16 12:27:12.045592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:06.705 #28 NEW cov: 12335 ft: 14643 corp: 23/133b lim: 10 exec/s: 28 rss: 74Mb L: 7/10 MS: 1 EraseBytes- 00:07:06.705 [2024-12-16 12:27:12.095925] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000d6ff cdw11:00000000 00:07:06.705 [2024-12-16 12:27:12.095950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.705 [2024-12-16 12:27:12.096065] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ 
(00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:06.705 [2024-12-16 12:27:12.096082] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.705 [2024-12-16 12:27:12.096187] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:06.705 [2024-12-16 12:27:12.096203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:06.705 [2024-12-16 12:27:12.096318] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000fffe cdw11:00000000 00:07:06.705 [2024-12-16 12:27:12.096333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:06.705 [2024-12-16 12:27:12.096445] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 00:07:06.705 [2024-12-16 12:27:12.096460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:06.705 #29 NEW cov: 12335 ft: 14672 corp: 24/143b lim: 10 exec/s: 29 rss: 74Mb L: 10/10 MS: 1 ChangeBinInt- 00:07:06.705 [2024-12-16 12:27:12.145194] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a62 cdw11:00000000 00:07:06.705 [2024-12-16 12:27:12.145220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.705 #30 NEW cov: 12335 ft: 14706 corp: 25/145b lim: 10 exec/s: 30 rss: 74Mb L: 2/10 MS: 1 InsertByte- 00:07:06.705 [2024-12-16 12:27:12.195434] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:000024ce cdw11:00000000 00:07:06.705 [2024-12-16 12:27:12.195460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.705 #31 NEW cov: 12335 ft: 14735 corp: 26/148b lim: 10 exec/s: 31 rss: 74Mb L: 3/10 MS: 1 ChangeByte- 00:07:06.705 [2024-12-16 12:27:12.246348] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000d6ff cdw11:00000000 00:07:06.705 [2024-12-16 12:27:12.246372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.705 [2024-12-16 12:27:12.246490] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:06.705 [2024-12-16 12:27:12.246522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.705 [2024-12-16 12:27:12.246635] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:06.705 [2024-12-16 12:27:12.246651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:06.705 [2024-12-16 12:27:12.246775] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000fffe cdw11:00000000 00:07:06.705 [2024-12-16 12:27:12.246792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 
00:07:06.705 [2024-12-16 12:27:12.246899] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 00:07:06.705 [2024-12-16 12:27:12.246916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:06.965 #32 NEW cov: 12335 ft: 14760 corp: 27/158b lim: 10 exec/s: 32 rss: 74Mb L: 10/10 MS: 1 CopyPart- 00:07:06.965 [2024-12-16 12:27:12.316537] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ce00 cdw11:00000000 00:07:06.965 [2024-12-16 12:27:12.316561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.965 [2024-12-16 12:27:12.316687] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:000005d5 cdw11:00000000 00:07:06.965 [2024-12-16 12:27:12.316704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.965 [2024-12-16 12:27:12.316812] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00005b00 cdw11:00000000 00:07:06.965 [2024-12-16 12:27:12.316828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:06.965 [2024-12-16 12:27:12.316935] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:06.965 [2024-12-16 12:27:12.316951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:06.965 [2024-12-16 12:27:12.317070] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 00:07:06.965 [2024-12-16 12:27:12.317087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:06.965 #33 NEW cov: 12335 ft: 14778 corp: 28/168b lim: 10 exec/s: 33 rss: 75Mb L: 10/10 MS: 1 ChangeByte- 00:07:06.965 [2024-12-16 12:27:12.386779] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000d6ff cdw11:00000000 00:07:06.965 [2024-12-16 12:27:12.386806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.965 [2024-12-16 12:27:12.386921] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:06.965 [2024-12-16 12:27:12.386936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.965 [2024-12-16 12:27:12.387037] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:06.965 [2024-12-16 12:27:12.387053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:06.965 [2024-12-16 12:27:12.387155] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffcc cdw11:00000000 00:07:06.965 [2024-12-16 12:27:12.387170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 
00:07:06.965 [2024-12-16 12:27:12.387273] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 00:07:06.965 [2024-12-16 12:27:12.387288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:06.965 #34 NEW cov: 12335 ft: 14801 corp: 29/178b lim: 10 exec/s: 34 rss: 75Mb L: 10/10 MS: 1 ChangeByte- 00:07:06.965 [2024-12-16 12:27:12.457084] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00002a00 cdw11:00000000 00:07:06.965 [2024-12-16 12:27:12.457109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.965 [2024-12-16 12:27:12.457235] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:06.965 [2024-12-16 12:27:12.457253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.965 [2024-12-16 12:27:12.457363] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:06.965 [2024-12-16 12:27:12.457381] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:06.965 [2024-12-16 12:27:12.457482] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:06.965 [2024-12-16 12:27:12.457499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:06.965 [2024-12-16 12:27:12.457606] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 00:07:06.965 [2024-12-16 12:27:12.457628] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:06.965 #35 NEW cov: 12335 ft: 14816 corp: 30/188b lim: 10 exec/s: 17 rss: 75Mb L: 10/10 MS: 1 CopyPart- 00:07:06.965 #35 DONE cov: 12335 ft: 14816 corp: 30/188b lim: 10 exec/s: 17 rss: 75Mb 00:07:06.965 ###### Recommended dictionary. ###### 00:07:06.965 "\000\005_\246o\255\321\346" # Uses: 0 00:07:06.965 "\001\000\000\000\000\000\000\004" # Uses: 0 00:07:06.965 ###### End of recommended dictionary. 
###### 00:07:06.965 Done 35 runs in 2 second(s) 00:07:07.225 12:27:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_7.conf /var/tmp/suppress_nvmf_fuzz 00:07:07.225 12:27:12 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:07.225 12:27:12 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:07.225 12:27:12 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 8 1 0x1 00:07:07.225 12:27:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=8 00:07:07.225 12:27:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:07.225 12:27:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:07.225 12:27:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:07:07.225 12:27:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_8.conf 00:07:07.225 12:27:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:07.225 12:27:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:07.225 12:27:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 8 00:07:07.225 12:27:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4408 00:07:07.225 12:27:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:07:07.225 12:27:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' 00:07:07.225 12:27:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4408"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:07.225 12:27:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:07.225 12:27:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:07.225 12:27:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' -c /tmp/fuzz_json_8.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 -Z 8 00:07:07.225 [2024-12-16 12:27:12.630394] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:07:07.225 [2024-12-16 12:27:12.630460] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid987454 ] 00:07:07.484 [2024-12-16 12:27:12.894622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.484 [2024-12-16 12:27:12.950546] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.484 [2024-12-16 12:27:13.009355] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:07.484 [2024-12-16 12:27:13.025671] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4408 *** 00:07:07.484 INFO: Running with entropic power schedule (0xFF, 100). 
00:07:07.484 INFO: Seed: 4229430011 00:07:07.743 INFO: Loaded 1 modules (390948 inline 8-bit counters): 390948 [0x2c8afcc, 0x2cea6f0), 00:07:07.743 INFO: Loaded 1 PC tables (390948 PCs): 390948 [0x2cea6f0,0x32e1930), 00:07:07.743 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:07:07.743 INFO: A corpus is not provided, starting from an empty corpus 00:07:07.744 [2024-12-16 12:27:13.074254] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.744 [2024-12-16 12:27:13.074282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.744 #2 INITED cov: 12135 ft: 12122 corp: 1/1b exec/s: 0 rss: 70Mb 00:07:07.744 [2024-12-16 12:27:13.114462] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.744 [2024-12-16 12:27:13.114489] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.744 [2024-12-16 12:27:13.114550] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.744 [2024-12-16 12:27:13.114568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.744 #3 NEW cov: 12248 ft: 13298 corp: 2/3b lim: 5 exec/s: 0 rss: 71Mb L: 2/2 MS: 1 CrossOver- 00:07:07.744 [2024-12-16 12:27:13.174627] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.744 [2024-12-16 12:27:13.174653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.744 [2024-12-16 12:27:13.174716] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.744 [2024-12-16 12:27:13.174730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.744 #4 NEW cov: 12254 ft: 13661 corp: 3/5b lim: 5 exec/s: 0 rss: 71Mb L: 2/2 MS: 1 ChangeBinInt- 00:07:07.744 [2024-12-16 12:27:13.234586] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.744 [2024-12-16 12:27:13.234617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.744 #5 NEW cov: 12339 ft: 13916 corp: 4/6b lim: 5 exec/s: 0 rss: 71Mb L: 1/2 MS: 1 CrossOver- 00:07:07.744 [2024-12-16 12:27:13.274892] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.744 [2024-12-16 12:27:13.274918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.744 [2024-12-16 12:27:13.274977] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE 
ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.744 [2024-12-16 12:27:13.274991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.003 #6 NEW cov: 12339 ft: 14028 corp: 5/8b lim: 5 exec/s: 0 rss: 71Mb L: 2/2 MS: 1 CopyPart- 00:07:08.003 [2024-12-16 12:27:13.335053] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.003 [2024-12-16 12:27:13.335078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.003 [2024-12-16 12:27:13.335137] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.003 [2024-12-16 12:27:13.335151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.003 #7 NEW cov: 12339 ft: 14135 corp: 6/10b lim: 5 exec/s: 0 rss: 71Mb L: 2/2 MS: 1 ShuffleBytes- 00:07:08.003 [2024-12-16 12:27:13.374997] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.003 [2024-12-16 12:27:13.375023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.003 #8 NEW cov: 12339 ft: 14164 corp: 7/11b lim: 5 exec/s: 0 rss: 71Mb L: 1/2 MS: 1 ShuffleBytes- 00:07:08.003 [2024-12-16 12:27:13.415242] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.003 [2024-12-16 12:27:13.415268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.003 [2024-12-16 12:27:13.415328] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.003 [2024-12-16 12:27:13.415345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.003 #9 NEW cov: 12339 ft: 14213 corp: 8/13b lim: 5 exec/s: 0 rss: 71Mb L: 2/2 MS: 1 InsertByte- 00:07:08.003 [2024-12-16 12:27:13.475744] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.003 [2024-12-16 12:27:13.475769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.003 [2024-12-16 12:27:13.475828] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.003 [2024-12-16 12:27:13.475842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.003 [2024-12-16 12:27:13.475902] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:07:08.003 [2024-12-16 12:27:13.475916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:08.003 [2024-12-16 12:27:13.475974] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.003 [2024-12-16 12:27:13.475987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:08.003 #10 NEW cov: 12339 ft: 14579 corp: 9/17b lim: 5 exec/s: 0 rss: 71Mb L: 4/4 MS: 1 CopyPart- 00:07:08.003 [2024-12-16 12:27:13.535579] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.003 [2024-12-16 12:27:13.535606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.003 [2024-12-16 12:27:13.535669] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.003 [2024-12-16 12:27:13.535682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.263 #11 NEW cov: 12339 ft: 14649 corp: 10/19b lim: 5 exec/s: 0 rss: 71Mb L: 2/4 MS: 1 ChangeBit- 00:07:08.263 [2024-12-16 12:27:13.595786] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.263 [2024-12-16 12:27:13.595813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.263 [2024-12-16 12:27:13.595871] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.263 [2024-12-16 12:27:13.595885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.263 #12 NEW cov: 12339 ft: 14657 corp: 11/21b lim: 5 exec/s: 0 rss: 71Mb L: 2/4 MS: 1 CopyPart- 00:07:08.263 [2024-12-16 12:27:13.635853] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.263 [2024-12-16 12:27:13.635878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.263 [2024-12-16 12:27:13.635938] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.263 [2024-12-16 12:27:13.635952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.263 #13 NEW cov: 12339 ft: 14713 corp: 12/23b lim: 5 exec/s: 0 rss: 71Mb L: 2/4 MS: 1 CrossOver- 00:07:08.263 [2024-12-16 12:27:13.675956] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.263 [2024-12-16 12:27:13.675983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.263 [2024-12-16 12:27:13.676045] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.263 [2024-12-16 12:27:13.676059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.263 #14 NEW cov: 12339 ft: 14727 corp: 13/25b lim: 5 exec/s: 0 rss: 71Mb L: 2/4 MS: 1 ChangeByte- 00:07:08.263 [2024-12-16 12:27:13.736301] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.263 [2024-12-16 12:27:13.736327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.263 [2024-12-16 12:27:13.736387] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.263 [2024-12-16 12:27:13.736401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.263 [2024-12-16 12:27:13.736459] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.263 [2024-12-16 12:27:13.736472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:08.263 #15 NEW cov: 12339 ft: 14892 corp: 14/28b lim: 5 exec/s: 0 rss: 71Mb L: 3/4 MS: 1 CopyPart- 00:07:08.263 [2024-12-16 12:27:13.776416] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.263 [2024-12-16 12:27:13.776442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.263 [2024-12-16 12:27:13.776502] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.263 [2024-12-16 12:27:13.776516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.263 [2024-12-16 12:27:13.776572] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.263 [2024-12-16 12:27:13.776585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:08.263 #16 NEW cov: 12339 ft: 14920 corp: 15/31b lim: 5 exec/s: 0 rss: 71Mb L: 3/4 MS: 1 CrossOver- 00:07:08.263 [2024-12-16 12:27:13.816543] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.263 [2024-12-16 12:27:13.816568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.263 [2024-12-16 12:27:13.816634] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 
cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.263 [2024-12-16 12:27:13.816649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.263 [2024-12-16 12:27:13.816707] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.263 [2024-12-16 12:27:13.816723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:08.523 #17 NEW cov: 12339 ft: 14945 corp: 16/34b lim: 5 exec/s: 0 rss: 71Mb L: 3/4 MS: 1 InsertByte- 00:07:08.523 [2024-12-16 12:27:13.856448] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.523 [2024-12-16 12:27:13.856474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.523 [2024-12-16 12:27:13.856533] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.523 [2024-12-16 12:27:13.856547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.523 #18 NEW cov: 12339 ft: 14955 corp: 17/36b lim: 5 exec/s: 0 rss: 71Mb L: 2/4 MS: 1 ChangeBit- 00:07:08.523 [2024-12-16 12:27:13.896423] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.523 [2024-12-16 12:27:13.896448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.523 #19 NEW cov: 12339 ft: 14977 corp: 18/37b lim: 5 exec/s: 0 rss: 71Mb L: 1/4 MS: 1 ShuffleBytes- 00:07:08.523 [2024-12-16 12:27:13.936527] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.523 [2024-12-16 12:27:13.936552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.782 NEW_FUNC[1/1]: 0x1c586b8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:652 00:07:08.782 #20 NEW cov: 12362 ft: 15030 corp: 19/38b lim: 5 exec/s: 20 rss: 73Mb L: 1/4 MS: 1 EraseBytes- 00:07:08.782 [2024-12-16 12:27:14.247292] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.782 [2024-12-16 12:27:14.247324] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.782 #21 NEW cov: 12362 ft: 15092 corp: 20/39b lim: 5 exec/s: 21 rss: 73Mb L: 1/4 MS: 1 ShuffleBytes- 00:07:08.782 [2024-12-16 12:27:14.287332] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.782 [2024-12-16 12:27:14.287358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:07:08.782 #22 NEW cov: 12362 ft: 15184 corp: 21/40b lim: 5 exec/s: 22 rss: 73Mb L: 1/4 MS: 1 EraseBytes- 00:07:08.782 [2024-12-16 12:27:14.327584] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.782 [2024-12-16 12:27:14.327616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.782 [2024-12-16 12:27:14.327673] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.782 [2024-12-16 12:27:14.327687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.041 #23 NEW cov: 12362 ft: 15196 corp: 22/42b lim: 5 exec/s: 23 rss: 73Mb L: 2/4 MS: 1 ChangeBinInt- 00:07:09.041 [2024-12-16 12:27:14.367571] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.041 [2024-12-16 12:27:14.367597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.041 #24 NEW cov: 12362 ft: 15197 corp: 23/43b lim: 5 exec/s: 24 rss: 73Mb L: 1/4 MS: 1 CrossOver- 00:07:09.041 [2024-12-16 12:27:14.427740] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.041 [2024-12-16 12:27:14.427766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.042 #25 NEW cov: 12362 ft: 15207 corp: 24/44b lim: 5 exec/s: 25 rss: 73Mb L: 1/4 MS: 1 ChangeByte- 00:07:09.042 [2024-12-16 12:27:14.467988] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.042 [2024-12-16 12:27:14.468014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.042 [2024-12-16 12:27:14.468071] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.042 [2024-12-16 12:27:14.468085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.042 #26 NEW cov: 12362 ft: 15253 corp: 25/46b lim: 5 exec/s: 26 rss: 73Mb L: 2/4 MS: 1 ChangeByte- 00:07:09.042 [2024-12-16 12:27:14.528024] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.042 [2024-12-16 12:27:14.528050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.042 #27 NEW cov: 12362 ft: 15271 corp: 26/47b lim: 5 exec/s: 27 rss: 73Mb L: 1/4 MS: 1 ChangeByte- 00:07:09.042 [2024-12-16 12:27:14.568276] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.042 [2024-12-16 12:27:14.568302] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.042 [2024-12-16 12:27:14.568358] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.042 [2024-12-16 12:27:14.568372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.301 #28 NEW cov: 12362 ft: 15304 corp: 27/49b lim: 5 exec/s: 28 rss: 73Mb L: 2/4 MS: 1 ShuffleBytes- 00:07:09.301 [2024-12-16 12:27:14.628438] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.301 [2024-12-16 12:27:14.628464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.301 [2024-12-16 12:27:14.628524] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.301 [2024-12-16 12:27:14.628538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.301 #29 NEW cov: 12362 ft: 15321 corp: 28/51b lim: 5 exec/s: 29 rss: 73Mb L: 2/4 MS: 1 InsertByte- 00:07:09.301 [2024-12-16 12:27:14.688624] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.301 [2024-12-16 12:27:14.688650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.301 [2024-12-16 12:27:14.688708] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.301 [2024-12-16 12:27:14.688722] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.301 #30 NEW cov: 12362 ft: 15332 corp: 29/53b lim: 5 exec/s: 30 rss: 73Mb L: 2/4 MS: 1 CrossOver- 00:07:09.301 [2024-12-16 12:27:14.748628] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.301 [2024-12-16 12:27:14.748654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.301 #31 NEW cov: 12362 ft: 15360 corp: 30/54b lim: 5 exec/s: 31 rss: 73Mb L: 1/4 MS: 1 ChangeByte- 00:07:09.301 [2024-12-16 12:27:14.788862] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.301 [2024-12-16 12:27:14.788888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.301 [2024-12-16 12:27:14.788942] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.301 [2024-12-16 12:27:14.788956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 
sqhd:0010 p:0 m:0 dnr:0 00:07:09.301 #32 NEW cov: 12362 ft: 15398 corp: 31/56b lim: 5 exec/s: 32 rss: 73Mb L: 2/4 MS: 1 ChangeBinInt- 00:07:09.301 [2024-12-16 12:27:14.828969] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.301 [2024-12-16 12:27:14.828995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.301 [2024-12-16 12:27:14.829052] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.301 [2024-12-16 12:27:14.829066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.301 #33 NEW cov: 12362 ft: 15403 corp: 32/58b lim: 5 exec/s: 33 rss: 73Mb L: 2/4 MS: 1 ShuffleBytes- 00:07:09.561 [2024-12-16 12:27:14.868967] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.561 [2024-12-16 12:27:14.868993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.561 #34 NEW cov: 12362 ft: 15417 corp: 33/59b lim: 5 exec/s: 34 rss: 73Mb L: 1/4 MS: 1 ShuffleBytes- 00:07:09.561 [2024-12-16 12:27:14.909057] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.561 [2024-12-16 12:27:14.909082] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.561 #35 NEW cov: 12362 ft: 15425 corp: 34/60b lim: 5 exec/s: 35 rss: 73Mb L: 1/4 MS: 1 EraseBytes- 00:07:09.561 [2024-12-16 12:27:14.969684] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.561 [2024-12-16 12:27:14.969710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.561 [2024-12-16 12:27:14.969767] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.561 [2024-12-16 12:27:14.969780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.561 [2024-12-16 12:27:14.969836] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.561 [2024-12-16 12:27:14.969850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:09.561 [2024-12-16 12:27:14.969910] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.561 [2024-12-16 12:27:14.969924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:09.561 #36 NEW cov: 12362 ft: 15435 corp: 35/64b lim: 5 
exec/s: 36 rss: 74Mb L: 4/4 MS: 1 CrossOver- 00:07:09.561 [2024-12-16 12:27:15.029998] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.561 [2024-12-16 12:27:15.030023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.561 [2024-12-16 12:27:15.030079] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.561 [2024-12-16 12:27:15.030093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.561 [2024-12-16 12:27:15.030150] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.561 [2024-12-16 12:27:15.030163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:09.561 [2024-12-16 12:27:15.030220] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.561 [2024-12-16 12:27:15.030233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:09.561 [2024-12-16 12:27:15.030288] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.561 [2024-12-16 12:27:15.030301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:09.561 #37 NEW cov: 12362 ft: 15480 corp: 36/69b lim: 5 exec/s: 37 rss: 74Mb L: 5/5 MS: 1 InsertRepeatedBytes- 00:07:09.561 [2024-12-16 12:27:15.069510] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.561 [2024-12-16 12:27:15.069534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.561 #38 NEW cov: 12362 ft: 15482 corp: 37/70b lim: 5 exec/s: 19 rss: 74Mb L: 1/5 MS: 1 CrossOver- 00:07:09.561 #38 DONE cov: 12362 ft: 15482 corp: 37/70b lim: 5 exec/s: 19 rss: 74Mb 00:07:09.561 Done 38 runs in 2 second(s) 00:07:09.820 12:27:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_8.conf /var/tmp/suppress_nvmf_fuzz 00:07:09.820 12:27:15 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:09.820 12:27:15 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:09.820 12:27:15 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 9 1 0x1 00:07:09.820 12:27:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=9 00:07:09.820 12:27:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:09.820 12:27:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:09.820 12:27:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:07:09.820 12:27:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local 
nvmf_cfg=/tmp/fuzz_json_9.conf 00:07:09.820 12:27:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:09.820 12:27:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:09.820 12:27:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 9 00:07:09.820 12:27:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4409 00:07:09.820 12:27:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:07:09.820 12:27:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409' 00:07:09.820 12:27:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4409"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:09.820 12:27:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:09.820 12:27:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:09.820 12:27:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409' -c /tmp/fuzz_json_9.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 -Z 9 00:07:09.820 [2024-12-16 12:27:15.260439] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:07:09.820 [2024-12-16 12:27:15.260509] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid987903 ] 00:07:10.079 [2024-12-16 12:27:15.521187] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.079 [2024-12-16 12:27:15.577366] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.079 [2024-12-16 12:27:15.636462] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:10.338 [2024-12-16 12:27:15.652789] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4409 *** 00:07:10.338 INFO: Running with entropic power schedule (0xFF, 100). 
00:07:10.338 INFO: Seed: 2563465830 00:07:10.338 INFO: Loaded 1 modules (390948 inline 8-bit counters): 390948 [0x2c8afcc, 0x2cea6f0), 00:07:10.338 INFO: Loaded 1 PC tables (390948 PCs): 390948 [0x2cea6f0,0x32e1930), 00:07:10.338 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:07:10.338 INFO: A corpus is not provided, starting from an empty corpus 00:07:10.338 [2024-12-16 12:27:15.719819] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.338 [2024-12-16 12:27:15.719858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.338 #2 INITED cov: 12135 ft: 12134 corp: 1/1b exec/s: 0 rss: 71Mb 00:07:10.338 [2024-12-16 12:27:15.770025] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.338 [2024-12-16 12:27:15.770054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.338 #3 NEW cov: 12248 ft: 12651 corp: 2/2b lim: 5 exec/s: 0 rss: 72Mb L: 1/1 MS: 1 ChangeBit- 00:07:10.338 [2024-12-16 12:27:15.840403] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.338 [2024-12-16 12:27:15.840430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.338 #4 NEW cov: 12254 ft: 12977 corp: 3/3b lim: 5 exec/s: 0 rss: 72Mb L: 1/1 MS: 1 ChangeByte- 00:07:10.597 [2024-12-16 12:27:15.910693] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.597 [2024-12-16 12:27:15.910720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.597 #5 NEW cov: 12339 ft: 13301 corp: 4/4b lim: 5 exec/s: 0 rss: 72Mb L: 1/1 MS: 1 CopyPart- 00:07:10.597 [2024-12-16 12:27:15.981679] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.597 [2024-12-16 12:27:15.981706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.597 [2024-12-16 12:27:15.981775] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.597 [2024-12-16 12:27:15.981789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.597 [2024-12-16 12:27:15.981858] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.597 [2024-12-16 12:27:15.981872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:10.597 #6 NEW cov: 12339 ft: 14065 corp: 5/7b lim: 5 exec/s: 0 rss: 72Mb L: 3/3 MS: 1 CMP- DE: "\006\000"- 
00:07:10.597 [2024-12-16 12:27:16.031111] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.597 [2024-12-16 12:27:16.031138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.597 #7 NEW cov: 12339 ft: 14188 corp: 6/8b lim: 5 exec/s: 0 rss: 72Mb L: 1/3 MS: 1 ChangeByte- 00:07:10.597 [2024-12-16 12:27:16.082102] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.597 [2024-12-16 12:27:16.082127] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.597 [2024-12-16 12:27:16.082199] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.597 [2024-12-16 12:27:16.082213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.597 [2024-12-16 12:27:16.082284] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.597 [2024-12-16 12:27:16.082299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:10.597 #8 NEW cov: 12339 ft: 14248 corp: 7/11b lim: 5 exec/s: 0 rss: 72Mb L: 3/3 MS: 1 ChangeBinInt- 00:07:10.597 [2024-12-16 12:27:16.152192] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.597 [2024-12-16 12:27:16.152219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.597 [2024-12-16 12:27:16.152289] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.597 [2024-12-16 12:27:16.152303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.857 #9 NEW cov: 12339 ft: 14485 corp: 8/13b lim: 5 exec/s: 0 rss: 72Mb L: 2/3 MS: 1 CrossOver- 00:07:10.857 [2024-12-16 12:27:16.222134] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.857 [2024-12-16 12:27:16.222159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.857 #10 NEW cov: 12339 ft: 14513 corp: 9/14b lim: 5 exec/s: 0 rss: 72Mb L: 1/3 MS: 1 ShuffleBytes- 00:07:10.857 [2024-12-16 12:27:16.292555] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.857 [2024-12-16 12:27:16.292584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.857 #11 NEW cov: 12339 ft: 14613 corp: 10/15b lim: 5 exec/s: 0 rss: 72Mb L: 1/3 MS: 1 EraseBytes- 
00:07:10.857 [2024-12-16 12:27:16.362842] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.857 [2024-12-16 12:27:16.362868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.857 #12 NEW cov: 12339 ft: 14649 corp: 11/16b lim: 5 exec/s: 0 rss: 72Mb L: 1/3 MS: 1 ChangeByte- 00:07:10.857 [2024-12-16 12:27:16.414011] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.857 [2024-12-16 12:27:16.414037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.857 [2024-12-16 12:27:16.414114] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.857 [2024-12-16 12:27:16.414130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.857 [2024-12-16 12:27:16.414216] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.857 [2024-12-16 12:27:16.414231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:11.117 #13 NEW cov: 12339 ft: 14672 corp: 12/19b lim: 5 exec/s: 0 rss: 72Mb L: 3/3 MS: 1 ChangeByte- 00:07:11.117 [2024-12-16 12:27:16.464727] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.117 [2024-12-16 12:27:16.464751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:11.117 [2024-12-16 12:27:16.464825] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.117 [2024-12-16 12:27:16.464840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:11.117 [2024-12-16 12:27:16.464916] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.117 [2024-12-16 12:27:16.464931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:11.117 [2024-12-16 12:27:16.465005] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.117 [2024-12-16 12:27:16.465020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:11.117 #14 NEW cov: 12339 ft: 15008 corp: 13/23b lim: 5 exec/s: 0 rss: 73Mb L: 4/4 MS: 1 InsertRepeatedBytes- 00:07:11.117 [2024-12-16 12:27:16.535301] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:07:11.117 [2024-12-16 12:27:16.535328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:11.117 [2024-12-16 12:27:16.535398] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.117 [2024-12-16 12:27:16.535412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:11.117 [2024-12-16 12:27:16.535482] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.117 [2024-12-16 12:27:16.535497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:11.117 [2024-12-16 12:27:16.535570] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.117 [2024-12-16 12:27:16.535584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:11.117 [2024-12-16 12:27:16.535646] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.117 [2024-12-16 12:27:16.535671] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:11.117 #15 NEW cov: 12339 ft: 15106 corp: 14/28b lim: 5 exec/s: 0 rss: 73Mb L: 5/5 MS: 1 CMP- DE: "\000\000\000\257"- 00:07:11.117 [2024-12-16 12:27:16.585110] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.117 [2024-12-16 12:27:16.585135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:11.117 [2024-12-16 12:27:16.585211] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.117 [2024-12-16 12:27:16.585225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:11.117 [2024-12-16 12:27:16.585297] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.117 [2024-12-16 12:27:16.585311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:11.117 [2024-12-16 12:27:16.585377] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.117 [2024-12-16 12:27:16.585390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:11.376 NEW_FUNC[1/1]: 0x1c586b8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:652 00:07:11.376 #16 NEW cov: 12362 ft: 15166 corp: 15/32b lim: 5 exec/s: 16 rss: 74Mb L: 4/5 MS: 1 
CrossOver- 00:07:11.376 [2024-12-16 12:27:16.894352] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.376 [2024-12-16 12:27:16.894386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:11.376 #17 NEW cov: 12362 ft: 15410 corp: 16/33b lim: 5 exec/s: 17 rss: 74Mb L: 1/5 MS: 1 ShuffleBytes- 00:07:11.636 [2024-12-16 12:27:16.944786] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.636 [2024-12-16 12:27:16.944813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:11.636 [2024-12-16 12:27:16.944951] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.636 [2024-12-16 12:27:16.944969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:11.636 #18 NEW cov: 12362 ft: 15482 corp: 17/35b lim: 5 exec/s: 18 rss: 74Mb L: 2/5 MS: 1 CrossOver- 00:07:11.636 [2024-12-16 12:27:17.015859] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.636 [2024-12-16 12:27:17.015887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:11.636 [2024-12-16 12:27:17.016031] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.636 [2024-12-16 12:27:17.016050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:11.636 [2024-12-16 12:27:17.016183] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.636 [2024-12-16 12:27:17.016200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:11.636 [2024-12-16 12:27:17.016357] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.636 [2024-12-16 12:27:17.016372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:11.636 [2024-12-16 12:27:17.016516] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.636 [2024-12-16 12:27:17.016533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:11.636 #19 NEW cov: 12362 ft: 15514 corp: 18/40b lim: 5 exec/s: 19 rss: 74Mb L: 5/5 MS: 1 PersAutoDict- DE: "\000\000\000\257"- 00:07:11.636 [2024-12-16 12:27:17.084935] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000004 
cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.636 [2024-12-16 12:27:17.084964] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:11.636 #20 NEW cov: 12362 ft: 15564 corp: 19/41b lim: 5 exec/s: 20 rss: 74Mb L: 1/5 MS: 1 ShuffleBytes- 00:07:11.636 [2024-12-16 12:27:17.134995] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.636 [2024-12-16 12:27:17.135023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:11.636 #21 NEW cov: 12362 ft: 15621 corp: 20/42b lim: 5 exec/s: 21 rss: 74Mb L: 1/5 MS: 1 ShuffleBytes- 00:07:11.636 [2024-12-16 12:27:17.185489] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.636 [2024-12-16 12:27:17.185518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:11.636 [2024-12-16 12:27:17.185670] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.636 [2024-12-16 12:27:17.185688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:11.895 #22 NEW cov: 12362 ft: 15711 corp: 21/44b lim: 5 exec/s: 22 rss: 74Mb L: 2/5 MS: 1 EraseBytes- 00:07:11.895 [2024-12-16 12:27:17.256065] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.895 [2024-12-16 12:27:17.256094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:11.895 [2024-12-16 12:27:17.256235] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.895 [2024-12-16 12:27:17.256255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:11.895 [2024-12-16 12:27:17.256393] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.895 [2024-12-16 12:27:17.256409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:11.895 #23 NEW cov: 12362 ft: 15750 corp: 22/47b lim: 5 exec/s: 23 rss: 74Mb L: 3/5 MS: 1 InsertByte- 00:07:11.895 [2024-12-16 12:27:17.326810] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.895 [2024-12-16 12:27:17.326837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:11.895 [2024-12-16 12:27:17.326967] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.895 [2024-12-16 12:27:17.326985] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:11.895 [2024-12-16 12:27:17.327120] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.895 [2024-12-16 12:27:17.327139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:11.895 [2024-12-16 12:27:17.327265] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.895 [2024-12-16 12:27:17.327284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:11.895 [2024-12-16 12:27:17.327420] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.895 [2024-12-16 12:27:17.327437] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:11.895 #24 NEW cov: 12362 ft: 15765 corp: 23/52b lim: 5 exec/s: 24 rss: 74Mb L: 5/5 MS: 1 ChangeBinInt- 00:07:11.895 [2024-12-16 12:27:17.396556] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.895 [2024-12-16 12:27:17.396586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:11.895 [2024-12-16 12:27:17.396715] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.896 [2024-12-16 12:27:17.396733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:11.896 [2024-12-16 12:27:17.396860] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.896 [2024-12-16 12:27:17.396878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:11.896 #25 NEW cov: 12362 ft: 15782 corp: 24/55b lim: 5 exec/s: 25 rss: 74Mb L: 3/5 MS: 1 CopyPart- 00:07:12.155 [2024-12-16 12:27:17.467333] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.155 [2024-12-16 12:27:17.467361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.155 [2024-12-16 12:27:17.467493] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.155 [2024-12-16 12:27:17.467514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:12.155 [2024-12-16 12:27:17.467648] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:07:12.155 [2024-12-16 12:27:17.467664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:12.155 [2024-12-16 12:27:17.467795] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.155 [2024-12-16 12:27:17.467813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:12.155 [2024-12-16 12:27:17.467947] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.155 [2024-12-16 12:27:17.467965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:12.155 #26 NEW cov: 12362 ft: 15791 corp: 25/60b lim: 5 exec/s: 26 rss: 75Mb L: 5/5 MS: 1 CrossOver- 00:07:12.155 [2024-12-16 12:27:17.537243] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.155 [2024-12-16 12:27:17.537270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.155 [2024-12-16 12:27:17.537423] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.155 [2024-12-16 12:27:17.537441] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:12.155 [2024-12-16 12:27:17.537569] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.155 [2024-12-16 12:27:17.537586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:12.155 [2024-12-16 12:27:17.537724] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.155 [2024-12-16 12:27:17.537741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:12.156 #27 NEW cov: 12362 ft: 15808 corp: 26/64b lim: 5 exec/s: 27 rss: 75Mb L: 4/5 MS: 1 InsertByte- 00:07:12.156 [2024-12-16 12:27:17.607512] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.156 [2024-12-16 12:27:17.607539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.156 [2024-12-16 12:27:17.607691] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.156 [2024-12-16 12:27:17.607709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:12.156 [2024-12-16 12:27:17.607841] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) 
qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.156 [2024-12-16 12:27:17.607856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:12.156 [2024-12-16 12:27:17.607986] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.156 [2024-12-16 12:27:17.608007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:12.156 #28 NEW cov: 12362 ft: 15832 corp: 27/68b lim: 5 exec/s: 28 rss: 75Mb L: 4/5 MS: 1 CrossOver- 00:07:12.156 [2024-12-16 12:27:17.656627] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.156 [2024-12-16 12:27:17.656654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.156 #29 NEW cov: 12362 ft: 15842 corp: 28/69b lim: 5 exec/s: 29 rss: 75Mb L: 1/5 MS: 1 CrossOver- 00:07:12.156 [2024-12-16 12:27:17.707639] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.156 [2024-12-16 12:27:17.707665] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.156 [2024-12-16 12:27:17.707815] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.156 [2024-12-16 12:27:17.707835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:12.156 [2024-12-16 12:27:17.707965] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.156 [2024-12-16 12:27:17.707983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:12.156 [2024-12-16 12:27:17.708116] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.156 [2024-12-16 12:27:17.708133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:12.416 #30 NEW cov: 12362 ft: 15855 corp: 29/73b lim: 5 exec/s: 15 rss: 75Mb L: 4/5 MS: 1 CrossOver- 00:07:12.416 #30 DONE cov: 12362 ft: 15855 corp: 29/73b lim: 5 exec/s: 15 rss: 75Mb 00:07:12.416 ###### Recommended dictionary. ###### 00:07:12.416 "\006\000" # Uses: 0 00:07:12.416 "\000\000\000\257" # Uses: 1 00:07:12.416 ###### End of recommended dictionary. 
###### 00:07:12.416 Done 30 runs in 2 second(s) 00:07:12.416 12:27:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_9.conf /var/tmp/suppress_nvmf_fuzz 00:07:12.416 12:27:17 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:12.416 12:27:17 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:12.416 12:27:17 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 10 1 0x1 00:07:12.416 12:27:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=10 00:07:12.416 12:27:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:12.416 12:27:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:12.416 12:27:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:07:12.416 12:27:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_10.conf 00:07:12.416 12:27:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:12.416 12:27:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:12.416 12:27:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 10 00:07:12.416 12:27:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4410 00:07:12.416 12:27:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:07:12.416 12:27:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' 00:07:12.416 12:27:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4410"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:12.416 12:27:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:12.416 12:27:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:12.416 12:27:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' -c /tmp/fuzz_json_10.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 -Z 10 00:07:12.416 [2024-12-16 12:27:17.900596] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 
00:07:12.416 [2024-12-16 12:27:17.900677] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid988432 ] 00:07:12.675 [2024-12-16 12:27:18.161049] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.675 [2024-12-16 12:27:18.215738] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.934 [2024-12-16 12:27:18.275179] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:12.934 [2024-12-16 12:27:18.291513] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4410 *** 00:07:12.934 INFO: Running with entropic power schedule (0xFF, 100). 00:07:12.934 INFO: Seed: 906513395 00:07:12.934 INFO: Loaded 1 modules (390948 inline 8-bit counters): 390948 [0x2c8afcc, 0x2cea6f0), 00:07:12.934 INFO: Loaded 1 PC tables (390948 PCs): 390948 [0x2cea6f0,0x32e1930), 00:07:12.934 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:07:12.934 INFO: A corpus is not provided, starting from an empty corpus 00:07:12.934 #2 INITED exec/s: 0 rss: 65Mb 00:07:12.934 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:12.934 This may also happen if the target rejected all inputs we tried so far 00:07:12.934 [2024-12-16 12:27:18.336834] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a2d2d2d cdw11:2d2d2d2d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.934 [2024-12-16 12:27:18.336861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.194 NEW_FUNC[1/716]: 0x448aa8 in fuzz_admin_security_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:205 00:07:13.194 NEW_FUNC[2/716]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:13.194 #3 NEW cov: 12158 ft: 12150 corp: 2/13b lim: 40 exec/s: 0 rss: 71Mb L: 12/12 MS: 1 InsertRepeatedBytes- 00:07:13.194 [2024-12-16 12:27:18.657528] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:c5c5c5c5 cdw11:c5c5c5c5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.194 [2024-12-16 12:27:18.657560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.194 #5 NEW cov: 12271 ft: 12720 corp: 3/26b lim: 40 exec/s: 0 rss: 72Mb L: 13/13 MS: 2 ChangeBit-InsertRepeatedBytes- 00:07:13.194 [2024-12-16 12:27:18.697624] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:032d2d85 cdw11:85858585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.194 [2024-12-16 12:27:18.697650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.194 #9 NEW cov: 12277 ft: 12932 corp: 4/36b lim: 40 exec/s: 0 rss: 72Mb L: 10/13 MS: 4 CrossOver-ChangeBinInt-CopyPart-InsertRepeatedBytes- 00:07:13.194 [2024-12-16 12:27:18.737843] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.194 [2024-12-16 
12:27:18.737872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.194 [2024-12-16 12:27:18.737934] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.194 [2024-12-16 12:27:18.737948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:13.453 #10 NEW cov: 12362 ft: 13431 corp: 5/56b lim: 40 exec/s: 0 rss: 72Mb L: 20/20 MS: 1 InsertRepeatedBytes- 00:07:13.453 [2024-12-16 12:27:18.778211] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:032d2d85 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.453 [2024-12-16 12:27:18.778237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.453 [2024-12-16 12:27:18.778293] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.454 [2024-12-16 12:27:18.778307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:13.454 [2024-12-16 12:27:18.778364] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.454 [2024-12-16 12:27:18.778378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:13.454 [2024-12-16 12:27:18.778435] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.454 [2024-12-16 12:27:18.778448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:13.454 #11 NEW cov: 12362 ft: 14121 corp: 6/95b lim: 40 exec/s: 0 rss: 72Mb L: 39/39 MS: 1 InsertRepeatedBytes- 00:07:13.454 [2024-12-16 12:27:18.838374] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.454 [2024-12-16 12:27:18.838401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.454 [2024-12-16 12:27:18.838459] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.454 [2024-12-16 12:27:18.838472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:13.454 [2024-12-16 12:27:18.838526] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.454 [2024-12-16 12:27:18.838539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:13.454 [2024-12-16 12:27:18.838599] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 
cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.454 [2024-12-16 12:27:18.838616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:13.454 #12 NEW cov: 12362 ft: 14161 corp: 7/130b lim: 40 exec/s: 0 rss: 72Mb L: 35/39 MS: 1 CopyPart- 00:07:13.454 [2024-12-16 12:27:18.898205] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:032d2d85 cdw11:85858571 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.454 [2024-12-16 12:27:18.898231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.454 #13 NEW cov: 12362 ft: 14217 corp: 8/140b lim: 40 exec/s: 0 rss: 72Mb L: 10/39 MS: 1 ChangeBinInt- 00:07:13.454 [2024-12-16 12:27:18.938488] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:032d2d85 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.454 [2024-12-16 12:27:18.938513] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.454 [2024-12-16 12:27:18.938569] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.454 [2024-12-16 12:27:18.938583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:13.454 [2024-12-16 12:27:18.938657] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00008585 cdw11:85850a0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.454 [2024-12-16 12:27:18.938671] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:13.454 #14 NEW cov: 12362 ft: 14439 corp: 9/164b lim: 40 exec/s: 0 rss: 72Mb L: 24/39 MS: 1 EraseBytes- 00:07:13.454 [2024-12-16 12:27:18.998813] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:032d2d85 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.454 [2024-12-16 12:27:18.998838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.454 [2024-12-16 12:27:18.998895] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.454 [2024-12-16 12:27:18.998909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:13.454 [2024-12-16 12:27:18.998965] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.454 [2024-12-16 12:27:18.998979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:13.454 [2024-12-16 12:27:18.999033] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.454 [2024-12-16 12:27:18.999045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 
dnr:0 00:07:13.713 #15 NEW cov: 12362 ft: 14528 corp: 10/203b lim: 40 exec/s: 0 rss: 72Mb L: 39/39 MS: 1 ShuffleBytes- 00:07:13.713 [2024-12-16 12:27:19.038842] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:03818181 cdw11:81818181 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.713 [2024-12-16 12:27:19.038868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.713 [2024-12-16 12:27:19.038926] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:81818181 cdw11:8181812d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.713 [2024-12-16 12:27:19.038940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:13.713 [2024-12-16 12:27:19.038997] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:2d858585 cdw11:85850a0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.713 [2024-12-16 12:27:19.039011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:13.713 #21 NEW cov: 12362 ft: 14591 corp: 11/227b lim: 40 exec/s: 0 rss: 72Mb L: 24/39 MS: 1 InsertRepeatedBytes- 00:07:13.713 [2024-12-16 12:27:19.078645] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:c5c585c5 cdw11:c5c5c5c5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.713 [2024-12-16 12:27:19.078670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.713 #27 NEW cov: 12362 ft: 14620 corp: 12/240b lim: 40 exec/s: 0 rss: 72Mb L: 13/39 MS: 1 ChangeBit- 00:07:13.713 [2024-12-16 12:27:19.138989] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a2d2d2d cdw11:2d2d2d2d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.713 [2024-12-16 12:27:19.139016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.713 [2024-12-16 12:27:19.139073] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:2d2d2d2d cdw11:2d2d2d2d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.713 [2024-12-16 12:27:19.139086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:13.713 #28 NEW cov: 12362 ft: 14642 corp: 13/262b lim: 40 exec/s: 0 rss: 72Mb L: 22/39 MS: 1 CopyPart- 00:07:13.713 [2024-12-16 12:27:19.199247] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:c5c5c5c5 cdw11:c5c5c5c5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.713 [2024-12-16 12:27:19.199272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.713 [2024-12-16 12:27:19.199328] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:c5c5c5c5 cdw11:08ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.713 [2024-12-16 12:27:19.199342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:13.713 [2024-12-16 12:27:19.199398] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.713 [2024-12-16 12:27:19.199412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:13.713 NEW_FUNC[1/1]: 0x1c586b8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:652 00:07:13.713 #29 NEW cov: 12385 ft: 14664 corp: 14/289b lim: 40 exec/s: 0 rss: 72Mb L: 27/39 MS: 1 InsertRepeatedBytes- 00:07:13.713 [2024-12-16 12:27:19.239241] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a2d2d2d cdw11:2d2d2d2d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.713 [2024-12-16 12:27:19.239266] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.713 [2024-12-16 12:27:19.239326] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:2d2d2d2d cdw11:2d2d2d2d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.713 [2024-12-16 12:27:19.239340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:13.973 #30 NEW cov: 12385 ft: 14707 corp: 15/311b lim: 40 exec/s: 0 rss: 72Mb L: 22/39 MS: 1 CopyPart- 00:07:13.973 [2024-12-16 12:27:19.299534] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:032d2d85 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.973 [2024-12-16 12:27:19.299560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.973 [2024-12-16 12:27:19.299622] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.973 [2024-12-16 12:27:19.299636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:13.973 [2024-12-16 12:27:19.299694] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00008585 cdw11:44444444 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.973 [2024-12-16 12:27:19.299707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:13.973 #31 NEW cov: 12385 ft: 14718 corp: 16/339b lim: 40 exec/s: 31 rss: 73Mb L: 28/39 MS: 1 InsertRepeatedBytes- 00:07:13.973 [2024-12-16 12:27:19.359452] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:2d2d8503 cdw11:85858571 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.973 [2024-12-16 12:27:19.359477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.973 #32 NEW cov: 12385 ft: 14742 corp: 17/349b lim: 40 exec/s: 32 rss: 73Mb L: 10/39 MS: 1 ShuffleBytes- 00:07:13.973 [2024-12-16 12:27:19.419997] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.973 [2024-12-16 12:27:19.420022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:07:13.973 [2024-12-16 12:27:19.420081] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.973 [2024-12-16 12:27:19.420095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:13.973 [2024-12-16 12:27:19.420148] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00720000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.973 [2024-12-16 12:27:19.420162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:13.973 [2024-12-16 12:27:19.420220] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.973 [2024-12-16 12:27:19.420232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:13.973 #33 NEW cov: 12385 ft: 14784 corp: 18/385b lim: 40 exec/s: 33 rss: 73Mb L: 36/39 MS: 1 InsertByte- 00:07:13.973 [2024-12-16 12:27:19.479829] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:032d2d85 cdw11:85858571 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.973 [2024-12-16 12:27:19.479854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.973 #34 NEW cov: 12385 ft: 14801 corp: 19/395b lim: 40 exec/s: 34 rss: 73Mb L: 10/39 MS: 1 ShuffleBytes- 00:07:13.973 [2024-12-16 12:27:19.520216] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:032d2d85 cdw11:00000047 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.973 [2024-12-16 12:27:19.520242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.973 [2024-12-16 12:27:19.520298] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:47474747 cdw11:47474747 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.973 [2024-12-16 12:27:19.520311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:13.973 [2024-12-16 12:27:19.520368] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:47470000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.973 [2024-12-16 12:27:19.520382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:13.973 [2024-12-16 12:27:19.520441] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00858544 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.973 [2024-12-16 12:27:19.520454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:14.233 #35 NEW cov: 12385 ft: 14845 corp: 20/434b lim: 40 exec/s: 35 rss: 73Mb L: 39/39 MS: 1 InsertRepeatedBytes- 00:07:14.233 [2024-12-16 12:27:19.580402] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:032d2d85 cdw11:00003b00 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:07:14.233 [2024-12-16 12:27:19.580430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.233 [2024-12-16 12:27:19.580486] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.233 [2024-12-16 12:27:19.580500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:14.233 [2024-12-16 12:27:19.580558] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.233 [2024-12-16 12:27:19.580572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:14.233 [2024-12-16 12:27:19.580631] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.233 [2024-12-16 12:27:19.580644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:14.233 #36 NEW cov: 12385 ft: 14866 corp: 21/473b lim: 40 exec/s: 36 rss: 73Mb L: 39/39 MS: 1 ChangeByte- 00:07:14.233 [2024-12-16 12:27:19.640211] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:c5c5c5c5 cdw11:c5c5c5c5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.233 [2024-12-16 12:27:19.640236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.233 #37 NEW cov: 12385 ft: 14874 corp: 22/486b lim: 40 exec/s: 37 rss: 73Mb L: 13/39 MS: 1 CopyPart- 00:07:14.233 [2024-12-16 12:27:19.680263] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:2d858585 cdw11:2d850371 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.233 [2024-12-16 12:27:19.680288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.233 #38 NEW cov: 12385 ft: 14955 corp: 23/496b lim: 40 exec/s: 38 rss: 73Mb L: 10/39 MS: 1 ShuffleBytes- 00:07:14.233 [2024-12-16 12:27:19.740453] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:032d2d85 cdw11:85008571 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.233 [2024-12-16 12:27:19.740477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.233 #39 NEW cov: 12385 ft: 14975 corp: 24/506b lim: 40 exec/s: 39 rss: 73Mb L: 10/39 MS: 1 CrossOver- 00:07:14.492 [2024-12-16 12:27:19.801024] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:032d2d85 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.492 [2024-12-16 12:27:19.801050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.492 [2024-12-16 12:27:19.801112] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.492 [2024-12-16 12:27:19.801126] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:14.492 [2024-12-16 12:27:19.801185] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.492 [2024-12-16 12:27:19.801198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:14.492 [2024-12-16 12:27:19.801255] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff8585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.492 [2024-12-16 12:27:19.801269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:14.492 #40 NEW cov: 12385 ft: 15025 corp: 25/542b lim: 40 exec/s: 40 rss: 73Mb L: 36/39 MS: 1 InsertRepeatedBytes- 00:07:14.492 [2024-12-16 12:27:19.840744] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:032d2d85 cdw11:85878571 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.492 [2024-12-16 12:27:19.840770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.492 #41 NEW cov: 12385 ft: 15081 corp: 26/552b lim: 40 exec/s: 41 rss: 73Mb L: 10/39 MS: 1 ChangeBit- 00:07:14.493 [2024-12-16 12:27:19.880839] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:032d7685 cdw11:85858571 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.493 [2024-12-16 12:27:19.880864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.493 #42 NEW cov: 12385 ft: 15092 corp: 27/562b lim: 40 exec/s: 42 rss: 73Mb L: 10/39 MS: 1 ChangeByte- 00:07:14.493 [2024-12-16 12:27:19.921359] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.493 [2024-12-16 12:27:19.921385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.493 [2024-12-16 12:27:19.921444] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:000000c5 cdw11:c585c5c5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.493 [2024-12-16 12:27:19.921457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:14.493 [2024-12-16 12:27:19.921517] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:c5c5c5c5 cdw11:c5c5c508 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.493 [2024-12-16 12:27:19.921531] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:14.493 [2024-12-16 12:27:19.921592] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.493 [2024-12-16 12:27:19.921605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:14.493 #43 NEW cov: 12385 ft: 15106 corp: 28/595b lim: 40 exec/s: 43 rss: 73Mb L: 33/39 
MS: 1 CrossOver- 00:07:14.493 [2024-12-16 12:27:19.961445] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:f7000000 cdw11:00000009 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.493 [2024-12-16 12:27:19.961471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.493 [2024-12-16 12:27:19.961528] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.493 [2024-12-16 12:27:19.961542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:14.493 [2024-12-16 12:27:19.961599] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.493 [2024-12-16 12:27:19.961617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:14.493 [2024-12-16 12:27:19.961676] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.493 [2024-12-16 12:27:19.961690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:14.493 #44 NEW cov: 12385 ft: 15146 corp: 29/630b lim: 40 exec/s: 44 rss: 73Mb L: 35/39 MS: 1 ChangeBinInt- 00:07:14.493 [2024-12-16 12:27:20.001355] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a2d2d2d cdw11:2d2d2d2d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.493 [2024-12-16 12:27:20.001381] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.493 [2024-12-16 12:27:20.001441] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:2d2d2d2d cdw11:2d2d2dd5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.493 [2024-12-16 12:27:20.001456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:14.493 #45 NEW cov: 12385 ft: 15163 corp: 30/652b lim: 40 exec/s: 45 rss: 73Mb L: 22/39 MS: 1 ChangeBinInt- 00:07:14.493 [2024-12-16 12:27:20.041711] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:032d2d00 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.493 [2024-12-16 12:27:20.041738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.493 [2024-12-16 12:27:20.041800] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.493 [2024-12-16 12:27:20.041815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:14.493 [2024-12-16 12:27:20.041874] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.493 [2024-12-16 12:27:20.041888] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:14.493 [2024-12-16 12:27:20.041945] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff8585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.493 [2024-12-16 12:27:20.041959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:14.752 #46 NEW cov: 12385 ft: 15174 corp: 31/688b lim: 40 exec/s: 46 rss: 73Mb L: 36/39 MS: 1 CopyPart- 00:07:14.752 [2024-12-16 12:27:20.101956] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:032d2d85 cdw11:00000047 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.752 [2024-12-16 12:27:20.101984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.752 [2024-12-16 12:27:20.102043] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:47474747 cdw11:47474747 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.752 [2024-12-16 12:27:20.102057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:14.752 [2024-12-16 12:27:20.102115] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:47470000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.752 [2024-12-16 12:27:20.102129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:14.753 [2024-12-16 12:27:20.102184] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00858544 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.753 [2024-12-16 12:27:20.102197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:14.753 #47 NEW cov: 12385 ft: 15195 corp: 32/727b lim: 40 exec/s: 47 rss: 73Mb L: 39/39 MS: 1 ShuffleBytes- 00:07:14.753 [2024-12-16 12:27:20.161851] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.753 [2024-12-16 12:27:20.161880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.753 [2024-12-16 12:27:20.161958] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.753 [2024-12-16 12:27:20.161978] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:14.753 #48 NEW cov: 12385 ft: 15196 corp: 33/747b lim: 40 exec/s: 48 rss: 74Mb L: 20/39 MS: 1 ShuffleBytes- 00:07:14.753 [2024-12-16 12:27:20.202192] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:032d2d85 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.753 [2024-12-16 12:27:20.202218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.753 [2024-12-16 12:27:20.202279] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 
cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.753 [2024-12-16 12:27:20.202293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:14.753 [2024-12-16 12:27:20.202353] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.753 [2024-12-16 12:27:20.202367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:14.753 [2024-12-16 12:27:20.202425] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.753 [2024-12-16 12:27:20.202438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:14.753 [2024-12-16 12:27:20.242295] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:032d2d85 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.753 [2024-12-16 12:27:20.242322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.753 [2024-12-16 12:27:20.242382] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.753 [2024-12-16 12:27:20.242395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:14.753 [2024-12-16 12:27:20.242454] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.753 [2024-12-16 12:27:20.242467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:14.753 [2024-12-16 12:27:20.242526] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.753 [2024-12-16 12:27:20.242539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:14.753 #50 NEW cov: 12385 ft: 15204 corp: 34/786b lim: 40 exec/s: 50 rss: 74Mb L: 39/39 MS: 2 CrossOver-ShuffleBytes- 00:07:14.753 [2024-12-16 12:27:20.282383] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:032d2d85 cdw11:00000047 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.753 [2024-12-16 12:27:20.282409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.753 [2024-12-16 12:27:20.282469] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:47474747 cdw11:47474747 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.753 [2024-12-16 12:27:20.282483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:14.753 [2024-12-16 12:27:20.282544] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:47470000 cdw11:00000000 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.753 [2024-12-16 12:27:20.282557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:14.753 [2024-12-16 12:27:20.282619] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00858545 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.753 [2024-12-16 12:27:20.282633] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:15.012 #51 NEW cov: 12385 ft: 15215 corp: 35/825b lim: 40 exec/s: 25 rss: 74Mb L: 39/39 MS: 1 ChangeBit- 00:07:15.012 #51 DONE cov: 12385 ft: 15215 corp: 35/825b lim: 40 exec/s: 25 rss: 74Mb 00:07:15.012 Done 51 runs in 2 second(s) 00:07:15.012 12:27:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_10.conf /var/tmp/suppress_nvmf_fuzz 00:07:15.012 12:27:20 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:15.012 12:27:20 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:15.012 12:27:20 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 11 1 0x1 00:07:15.012 12:27:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=11 00:07:15.012 12:27:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:15.012 12:27:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:15.012 12:27:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:07:15.012 12:27:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_11.conf 00:07:15.012 12:27:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:15.012 12:27:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:15.012 12:27:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 11 00:07:15.012 12:27:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4411 00:07:15.012 12:27:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:07:15.012 12:27:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' 00:07:15.012 12:27:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4411"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:15.012 12:27:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:15.012 12:27:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:15.012 12:27:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' -c /tmp/fuzz_json_11.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 -Z 11 00:07:15.012 [2024-12-16 12:27:20.478408] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 
00:07:15.012 [2024-12-16 12:27:20.478476] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid988899 ] 00:07:15.271 [2024-12-16 12:27:20.741203] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.271 [2024-12-16 12:27:20.800904] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.531 [2024-12-16 12:27:20.860150] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:15.531 [2024-12-16 12:27:20.876459] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4411 *** 00:07:15.531 INFO: Running with entropic power schedule (0xFF, 100). 00:07:15.531 INFO: Seed: 3492507811 00:07:15.531 INFO: Loaded 1 modules (390948 inline 8-bit counters): 390948 [0x2c8afcc, 0x2cea6f0), 00:07:15.531 INFO: Loaded 1 PC tables (390948 PCs): 390948 [0x2cea6f0,0x32e1930), 00:07:15.531 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:07:15.531 INFO: A corpus is not provided, starting from an empty corpus 00:07:15.531 #2 INITED exec/s: 0 rss: 65Mb 00:07:15.531 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:15.531 This may also happen if the target rejected all inputs we tried so far 00:07:15.531 [2024-12-16 12:27:20.932326] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:e40a1fff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.531 [2024-12-16 12:27:20.932355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.531 [2024-12-16 12:27:20.932415] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.531 [2024-12-16 12:27:20.932429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.531 [2024-12-16 12:27:20.932485] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.531 [2024-12-16 12:27:20.932499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.531 [2024-12-16 12:27:20.932557] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.531 [2024-12-16 12:27:20.932570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:15.790 NEW_FUNC[1/717]: 0x44a818 in fuzz_admin_security_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:223 00:07:15.790 NEW_FUNC[2/717]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:15.790 #12 NEW cov: 12170 ft: 12169 corp: 2/38b lim: 40 exec/s: 0 rss: 72Mb L: 37/37 MS: 5 InsertByte-ShuffleBytes-ShuffleBytes-InsertByte-InsertRepeatedBytes- 00:07:15.790 [2024-12-16 12:27:21.243180] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:e40a1fff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.790 [2024-12-16 12:27:21.243219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.790 [2024-12-16 12:27:21.243291] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.790 [2024-12-16 12:27:21.243309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.790 [2024-12-16 12:27:21.243378] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.790 [2024-12-16 12:27:21.243396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.790 [2024-12-16 12:27:21.243465] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:fe030000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.790 [2024-12-16 12:27:21.243483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:15.790 #13 NEW cov: 12283 ft: 12753 corp: 3/75b lim: 40 exec/s: 0 rss: 72Mb L: 37/37 MS: 1 CMP- DE: "\376\003\000\000\000\000\000\000"- 00:07:15.790 [2024-12-16 12:27:21.303217] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:e40a1fff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.790 [2024-12-16 12:27:21.303247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.790 [2024-12-16 12:27:21.303308] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.790 [2024-12-16 12:27:21.303321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.790 [2024-12-16 12:27:21.303380] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffaeffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.790 [2024-12-16 12:27:21.303394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.790 [2024-12-16 12:27:21.303451] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:fe030000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.790 [2024-12-16 12:27:21.303465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:15.790 #14 NEW cov: 12289 ft: 12932 corp: 4/112b lim: 40 exec/s: 0 rss: 72Mb L: 37/37 MS: 1 ChangeByte- 00:07:16.050 [2024-12-16 12:27:21.363350] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:e40a1fff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.050 [2024-12-16 12:27:21.363377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.050 [2024-12-16 12:27:21.363436] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.050 [2024-12-16 12:27:21.363450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:16.050 [2024-12-16 12:27:21.363505] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffaefffb SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.050 [2024-12-16 12:27:21.363519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:16.050 [2024-12-16 12:27:21.363574] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:fe030000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.050 [2024-12-16 12:27:21.363587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:16.050 #15 NEW cov: 12374 ft: 13326 corp: 5/149b lim: 40 exec/s: 0 rss: 72Mb L: 37/37 MS: 1 ChangeBit- 00:07:16.050 [2024-12-16 12:27:21.423519] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:e40a1fff cdw11:ffff15ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.050 [2024-12-16 12:27:21.423547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.050 [2024-12-16 12:27:21.423615] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.050 [2024-12-16 12:27:21.423629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:16.050 [2024-12-16 12:27:21.423686] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffaefffb SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.050 [2024-12-16 12:27:21.423700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:16.050 [2024-12-16 12:27:21.423767] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:fe030000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.050 [2024-12-16 12:27:21.423781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:16.050 #16 NEW cov: 12374 ft: 13408 corp: 6/186b lim: 40 exec/s: 0 rss: 72Mb L: 37/37 MS: 1 ChangeByte- 00:07:16.050 [2024-12-16 12:27:21.483665] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:e40a1fff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.050 [2024-12-16 12:27:21.483690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.050 [2024-12-16 12:27:21.483750] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.050 [2024-12-16 12:27:21.483764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:16.050 [2024-12-16 12:27:21.483818] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffaeffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.050 [2024-12-16 12:27:21.483832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:16.050 [2024-12-16 12:27:21.483890] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:fe030000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.050 [2024-12-16 12:27:21.483904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:16.050 #17 NEW cov: 12374 ft: 13565 corp: 7/223b lim: 40 exec/s: 0 rss: 72Mb L: 37/37 MS: 1 ShuffleBytes- 00:07:16.050 [2024-12-16 12:27:21.523786] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:e40a1fff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.050 [2024-12-16 12:27:21.523811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.050 [2024-12-16 12:27:21.523872] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:fffffffe SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.051 [2024-12-16 12:27:21.523885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:16.051 [2024-12-16 12:27:21.523938] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffaeffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.051 [2024-12-16 12:27:21.523952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:16.051 [2024-12-16 12:27:21.524010] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:fe030000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.051 [2024-12-16 12:27:21.524025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:16.051 #18 NEW cov: 12374 ft: 13615 corp: 8/260b lim: 40 exec/s: 0 rss: 72Mb L: 37/37 MS: 1 ChangeBit- 00:07:16.051 [2024-12-16 12:27:21.583955] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:e40a1fff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.051 [2024-12-16 12:27:21.583981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.051 [2024-12-16 12:27:21.584042] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.051 [2024-12-16 12:27:21.584056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:16.051 [2024-12-16 12:27:21.584114] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffaefffb SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.051 [2024-12-16 12:27:21.584129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:16.051 [2024-12-16 
12:27:21.584189] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:fe030004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.051 [2024-12-16 12:27:21.584203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:16.051 #19 NEW cov: 12374 ft: 13653 corp: 9/297b lim: 40 exec/s: 0 rss: 72Mb L: 37/37 MS: 1 ChangeBit- 00:07:16.310 [2024-12-16 12:27:21.623776] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:14e40a1f cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.310 [2024-12-16 12:27:21.623803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.310 [2024-12-16 12:27:21.623862] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.310 [2024-12-16 12:27:21.623876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:16.310 #24 NEW cov: 12374 ft: 14086 corp: 10/318b lim: 40 exec/s: 0 rss: 72Mb L: 21/37 MS: 5 ShuffleBytes-ChangeBit-ChangeByte-ChangeBit-CrossOver- 00:07:16.310 [2024-12-16 12:27:21.664066] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:14e40a1f cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.310 [2024-12-16 12:27:21.664093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.310 [2024-12-16 12:27:21.664154] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.310 [2024-12-16 12:27:21.664168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:16.310 [2024-12-16 12:27:21.664224] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.310 [2024-12-16 12:27:21.664238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:16.310 [2024-12-16 12:27:21.664297] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.310 [2024-12-16 12:27:21.664311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:16.310 #25 NEW cov: 12374 ft: 14112 corp: 11/352b lim: 40 exec/s: 0 rss: 72Mb L: 34/37 MS: 1 CopyPart- 00:07:16.310 [2024-12-16 12:27:21.724360] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:e40a1fff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.310 [2024-12-16 12:27:21.724387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.310 [2024-12-16 12:27:21.724447] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.310 [2024-12-16 
12:27:21.724461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:16.310 [2024-12-16 12:27:21.724517] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffff2500 cdw11:0000fffb SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.310 [2024-12-16 12:27:21.724532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:16.310 [2024-12-16 12:27:21.724589] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:fe030000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.310 [2024-12-16 12:27:21.724619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:16.310 #26 NEW cov: 12374 ft: 14169 corp: 12/389b lim: 40 exec/s: 0 rss: 72Mb L: 37/37 MS: 1 ChangeBinInt- 00:07:16.310 [2024-12-16 12:27:21.764332] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:e4ffffff cdw11:fffffffe SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.310 [2024-12-16 12:27:21.764359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.310 [2024-12-16 12:27:21.764416] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffaeffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.310 [2024-12-16 12:27:21.764430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:16.310 [2024-12-16 12:27:21.764488] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:fe030000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.310 [2024-12-16 12:27:21.764502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:16.310 NEW_FUNC[1/1]: 0x1c586b8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:652 00:07:16.310 #27 NEW cov: 12397 ft: 14385 corp: 13/418b lim: 40 exec/s: 0 rss: 73Mb L: 29/37 MS: 1 EraseBytes- 00:07:16.310 [2024-12-16 12:27:21.824645] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:e40a1fff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.310 [2024-12-16 12:27:21.824671] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.310 [2024-12-16 12:27:21.824731] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.310 [2024-12-16 12:27:21.824745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:16.310 [2024-12-16 12:27:21.824799] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffaeffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.310 [2024-12-16 12:27:21.824812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:16.310 [2024-12-16 12:27:21.824870] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:fe030000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.310 [2024-12-16 12:27:21.824884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:16.310 #28 NEW cov: 12397 ft: 14393 corp: 14/455b lim: 40 exec/s: 0 rss: 73Mb L: 37/37 MS: 1 ShuffleBytes- 00:07:16.310 [2024-12-16 12:27:21.864740] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:e40a1fff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.310 [2024-12-16 12:27:21.864766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.310 [2024-12-16 12:27:21.864829] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.310 [2024-12-16 12:27:21.864843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:16.310 [2024-12-16 12:27:21.864903] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffaefffb SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.310 [2024-12-16 12:27:21.864917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:16.310 [2024-12-16 12:27:21.864972] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:fe030024 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.311 [2024-12-16 12:27:21.864989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:16.570 #29 NEW cov: 12397 ft: 14414 corp: 15/492b lim: 40 exec/s: 0 rss: 73Mb L: 37/37 MS: 1 ChangeBit- 00:07:16.570 [2024-12-16 12:27:21.924932] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:e40ae500 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.570 [2024-12-16 12:27:21.924959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.570 [2024-12-16 12:27:21.925019] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:fffffffe SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.570 [2024-12-16 12:27:21.925032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:16.570 [2024-12-16 12:27:21.925092] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffaeffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.570 [2024-12-16 12:27:21.925105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:16.570 [2024-12-16 12:27:21.925160] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:fe030000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.570 [2024-12-16 12:27:21.925173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:16.570 #30 NEW cov: 12397 ft: 14504 corp: 16/529b lim: 40 exec/s: 30 rss: 73Mb 
L: 37/37 MS: 1 ChangeBinInt- 00:07:16.570 [2024-12-16 12:27:21.965018] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:e40a1fff cdw11:f9ffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.570 [2024-12-16 12:27:21.965044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.570 [2024-12-16 12:27:21.965105] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.570 [2024-12-16 12:27:21.965119] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:16.570 [2024-12-16 12:27:21.965178] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.570 [2024-12-16 12:27:21.965192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:16.570 [2024-12-16 12:27:21.965251] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.570 [2024-12-16 12:27:21.965264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:16.570 #31 NEW cov: 12397 ft: 14557 corp: 17/566b lim: 40 exec/s: 31 rss: 73Mb L: 37/37 MS: 1 ChangeBinInt- 00:07:16.570 [2024-12-16 12:27:22.005113] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:e40a1fff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.570 [2024-12-16 12:27:22.005139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.570 [2024-12-16 12:27:22.005198] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ff26fffe SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.570 [2024-12-16 12:27:22.005211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:16.570 [2024-12-16 12:27:22.005267] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffaeffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.570 [2024-12-16 12:27:22.005284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:16.570 [2024-12-16 12:27:22.005341] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:fe030000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.570 [2024-12-16 12:27:22.005355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:16.570 #32 NEW cov: 12397 ft: 14574 corp: 18/603b lim: 40 exec/s: 32 rss: 73Mb L: 37/37 MS: 1 ChangeByte- 00:07:16.570 [2024-12-16 12:27:22.045243] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:e40a1fff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.570 [2024-12-16 12:27:22.045270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) 
qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.570 [2024-12-16 12:27:22.045332] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:2bffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.570 [2024-12-16 12:27:22.045345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:16.570 [2024-12-16 12:27:22.045402] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffaefffb SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.570 [2024-12-16 12:27:22.045416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:16.570 [2024-12-16 12:27:22.045476] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:fe030000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.570 [2024-12-16 12:27:22.045489] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:16.570 #33 NEW cov: 12397 ft: 14598 corp: 19/640b lim: 40 exec/s: 33 rss: 73Mb L: 37/37 MS: 1 ChangeByte- 00:07:16.570 [2024-12-16 12:27:22.085034] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:e40a1fff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.570 [2024-12-16 12:27:22.085060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.570 [2024-12-16 12:27:22.085120] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.570 [2024-12-16 12:27:22.085134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:16.570 #34 NEW cov: 12397 ft: 14623 corp: 20/663b lim: 40 exec/s: 34 rss: 73Mb L: 23/37 MS: 1 EraseBytes- 00:07:16.570 [2024-12-16 12:27:22.125149] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:e40a1fff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.570 [2024-12-16 12:27:22.125175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.570 [2024-12-16 12:27:22.125234] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff81ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.570 [2024-12-16 12:27:22.125248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:16.829 #35 NEW cov: 12397 ft: 14677 corp: 21/686b lim: 40 exec/s: 35 rss: 73Mb L: 23/37 MS: 1 ChangeByte- 00:07:16.829 [2024-12-16 12:27:22.185671] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:e40a1fff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.829 [2024-12-16 12:27:22.185697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.829 [2024-12-16 12:27:22.185758] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:2bffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:07:16.829 [2024-12-16 12:27:22.185771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:16.829 [2024-12-16 12:27:22.185831] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ff3dffff cdw11:ffffaeff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.829 [2024-12-16 12:27:22.185845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:16.829 [2024-12-16 12:27:22.185904] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:fbffffff cdw11:fffe0300 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.829 [2024-12-16 12:27:22.185918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:16.829 #36 NEW cov: 12397 ft: 14693 corp: 22/724b lim: 40 exec/s: 36 rss: 73Mb L: 38/38 MS: 1 InsertByte- 00:07:16.829 [2024-12-16 12:27:22.245812] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:e40a1fff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.829 [2024-12-16 12:27:22.245838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.829 [2024-12-16 12:27:22.245901] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:07ffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.829 [2024-12-16 12:27:22.245915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:16.829 [2024-12-16 12:27:22.245977] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffaeffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.829 [2024-12-16 12:27:22.245991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:16.829 [2024-12-16 12:27:22.246050] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:fe030000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.829 [2024-12-16 12:27:22.246064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:16.829 #37 NEW cov: 12397 ft: 14749 corp: 23/761b lim: 40 exec/s: 37 rss: 73Mb L: 37/38 MS: 1 CMP- DE: "\377\007"- 00:07:16.829 [2024-12-16 12:27:22.285964] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:e40a1fff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.829 [2024-12-16 12:27:22.285990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.829 [2024-12-16 12:27:22.286052] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:2bffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.829 [2024-12-16 12:27:22.286066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:16.829 [2024-12-16 12:27:22.286125] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffaefffb SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:07:16.829 [2024-12-16 12:27:22.286139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:16.829 [2024-12-16 12:27:22.286197] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:fe030000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.829 [2024-12-16 12:27:22.286211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:16.829 #38 NEW cov: 12397 ft: 14771 corp: 24/798b lim: 40 exec/s: 38 rss: 73Mb L: 37/38 MS: 1 ShuffleBytes- 00:07:16.829 [2024-12-16 12:27:22.326029] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:e40a1fff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.829 [2024-12-16 12:27:22.326055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.829 [2024-12-16 12:27:22.326114] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.829 [2024-12-16 12:27:22.326128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:16.829 [2024-12-16 12:27:22.326182] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.829 [2024-12-16 12:27:22.326196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:16.829 [2024-12-16 12:27:22.326253] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:81ffff00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.829 [2024-12-16 12:27:22.326266] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:16.829 #39 NEW cov: 12397 ft: 14780 corp: 25/831b lim: 40 exec/s: 39 rss: 73Mb L: 33/38 MS: 1 InsertRepeatedBytes- 00:07:16.829 [2024-12-16 12:27:22.385900] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:e40a1fff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.829 [2024-12-16 12:27:22.385925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.829 [2024-12-16 12:27:22.385986] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.829 [2024-12-16 12:27:22.386000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.089 #40 NEW cov: 12397 ft: 14789 corp: 26/854b lim: 40 exec/s: 40 rss: 73Mb L: 23/38 MS: 1 CopyPart- 00:07:17.089 [2024-12-16 12:27:22.426345] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:e40a1fff cdw11:fffff7ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.089 [2024-12-16 12:27:22.426371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.089 [2024-12-16 12:27:22.426433] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.089 [2024-12-16 12:27:22.426447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.089 [2024-12-16 12:27:22.426505] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffaeffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.089 [2024-12-16 12:27:22.426518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:17.089 [2024-12-16 12:27:22.426593] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:fe030000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.089 [2024-12-16 12:27:22.426614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:17.089 #41 NEW cov: 12397 ft: 14814 corp: 27/891b lim: 40 exec/s: 41 rss: 73Mb L: 37/38 MS: 1 ChangeBit- 00:07:17.089 [2024-12-16 12:27:22.466449] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:e40a1fff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.089 [2024-12-16 12:27:22.466475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.089 [2024-12-16 12:27:22.466539] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:fffffbff cdw11:fffffffe SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.089 [2024-12-16 12:27:22.466554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.089 [2024-12-16 12:27:22.466614] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffaeffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.089 [2024-12-16 12:27:22.466628] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:17.089 [2024-12-16 12:27:22.466684] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:fe030000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.089 [2024-12-16 12:27:22.466697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:17.089 #42 NEW cov: 12397 ft: 14826 corp: 28/928b lim: 40 exec/s: 42 rss: 73Mb L: 37/38 MS: 1 ChangeBit- 00:07:17.089 [2024-12-16 12:27:22.506520] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:e4230a1f cdw11:fff9ffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.089 [2024-12-16 12:27:22.506547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.089 [2024-12-16 12:27:22.506614] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.089 [2024-12-16 12:27:22.506629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.089 [2024-12-16 12:27:22.506689] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.089 [2024-12-16 12:27:22.506703] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:17.089 [2024-12-16 12:27:22.506759] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.089 [2024-12-16 12:27:22.506772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:17.089 #43 NEW cov: 12397 ft: 14846 corp: 29/966b lim: 40 exec/s: 43 rss: 73Mb L: 38/38 MS: 1 InsertByte- 00:07:17.089 [2024-12-16 12:27:22.566514] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:e40a1fff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.089 [2024-12-16 12:27:22.566540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.089 [2024-12-16 12:27:22.566601] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.089 [2024-12-16 12:27:22.566621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.089 [2024-12-16 12:27:22.566679] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ff000000 cdw11:0000fff8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.089 [2024-12-16 12:27:22.566693] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:17.089 #44 NEW cov: 12397 ft: 14911 corp: 30/994b lim: 40 exec/s: 44 rss: 73Mb L: 28/38 MS: 1 InsertRepeatedBytes- 00:07:17.089 [2024-12-16 12:27:22.606485] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:e40a1fff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.089 [2024-12-16 12:27:22.606510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.089 [2024-12-16 12:27:22.606574] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.089 [2024-12-16 12:27:22.606588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.089 #45 NEW cov: 12397 ft: 14926 corp: 31/1015b lim: 40 exec/s: 45 rss: 73Mb L: 21/38 MS: 1 EraseBytes- 00:07:17.089 [2024-12-16 12:27:22.646954] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:e40ae500 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.089 [2024-12-16 12:27:22.646980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.089 [2024-12-16 12:27:22.647039] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:600000ff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.089 [2024-12-16 12:27:22.647053] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.089 [2024-12-16 12:27:22.647111] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:feffffff cdw11:ffffaeff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.089 [2024-12-16 12:27:22.647126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:17.089 [2024-12-16 12:27:22.647185] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:fffe0300 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.089 [2024-12-16 12:27:22.647199] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:17.348 #46 NEW cov: 12397 ft: 14945 corp: 32/1053b lim: 40 exec/s: 46 rss: 73Mb L: 38/38 MS: 1 InsertByte- 00:07:17.348 [2024-12-16 12:27:22.707272] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:14e40a1f cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.348 [2024-12-16 12:27:22.707297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.348 [2024-12-16 12:27:22.707358] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.348 [2024-12-16 12:27:22.707372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.348 [2024-12-16 12:27:22.707430] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.348 [2024-12-16 12:27:22.707444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:17.348 [2024-12-16 12:27:22.707503] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.348 [2024-12-16 12:27:22.707516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:17.348 [2024-12-16 12:27:22.707574] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:ffe40a1f cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.348 [2024-12-16 12:27:22.707587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:17.348 #47 NEW cov: 12397 ft: 14992 corp: 33/1093b lim: 40 exec/s: 47 rss: 73Mb L: 40/40 MS: 1 CrossOver- 00:07:17.349 [2024-12-16 12:27:22.767322] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:e40a1f01 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.349 [2024-12-16 12:27:22.767349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.349 [2024-12-16 12:27:22.767413] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:000006ff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.349 [2024-12-16 12:27:22.767427] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.349 [2024-12-16 12:27:22.767485] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.349 [2024-12-16 12:27:22.767499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:17.349 [2024-12-16 12:27:22.767559] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.349 [2024-12-16 12:27:22.767572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:17.349 #48 NEW cov: 12397 ft: 15002 corp: 34/1130b lim: 40 exec/s: 48 rss: 73Mb L: 37/40 MS: 1 ChangeBinInt- 00:07:17.349 [2024-12-16 12:27:22.807245] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:e4ffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.349 [2024-12-16 12:27:22.807272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.349 [2024-12-16 12:27:22.807334] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffaefffb SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.349 [2024-12-16 12:27:22.807348] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.349 [2024-12-16 12:27:22.807409] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:fe030024 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.349 [2024-12-16 12:27:22.807423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:17.349 #49 NEW cov: 12397 ft: 15031 corp: 35/1159b lim: 40 exec/s: 49 rss: 74Mb L: 29/40 MS: 1 EraseBytes- 00:07:17.349 [2024-12-16 12:27:22.867545] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:e40a1fff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.349 [2024-12-16 12:27:22.867571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.349 [2024-12-16 12:27:22.867632] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffff0000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.349 [2024-12-16 12:27:22.867646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.349 [2024-12-16 12:27:22.867706] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:0021ffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.349 [2024-12-16 12:27:22.867720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:17.349 [2024-12-16 12:27:22.867778] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:81ffff00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.349 [2024-12-16 
12:27:22.867791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:17.349 #50 NEW cov: 12397 ft: 15032 corp: 36/1192b lim: 40 exec/s: 50 rss: 74Mb L: 33/40 MS: 1 ChangeBinInt- 00:07:17.609 [2024-12-16 12:27:22.927229] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:a3290c00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.609 [2024-12-16 12:27:22.927255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.609 #54 NEW cov: 12397 ft: 15701 corp: 37/1202b lim: 40 exec/s: 27 rss: 74Mb L: 10/40 MS: 4 ChangeByte-ChangeBit-InsertByte-CMP- DE: "\014\000\000\000\000\000\000\000"- 00:07:17.609 #54 DONE cov: 12397 ft: 15701 corp: 37/1202b lim: 40 exec/s: 27 rss: 74Mb 00:07:17.609 ###### Recommended dictionary. ###### 00:07:17.609 "\376\003\000\000\000\000\000\000" # Uses: 0 00:07:17.609 "\377\007" # Uses: 0 00:07:17.609 "\014\000\000\000\000\000\000\000" # Uses: 0 00:07:17.609 ###### End of recommended dictionary. ###### 00:07:17.609 Done 54 runs in 2 second(s) 00:07:17.609 12:27:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_11.conf /var/tmp/suppress_nvmf_fuzz 00:07:17.609 12:27:23 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:17.609 12:27:23 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:17.609 12:27:23 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 12 1 0x1 00:07:17.609 12:27:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=12 00:07:17.609 12:27:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:17.609 12:27:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:17.609 12:27:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:07:17.609 12:27:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_12.conf 00:07:17.609 12:27:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:17.609 12:27:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:17.609 12:27:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 12 00:07:17.609 12:27:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4412 00:07:17.609 12:27:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:07:17.609 12:27:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' 00:07:17.609 12:27:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4412"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:17.609 12:27:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:17.609 12:27:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:17.609 12:27:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 
'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' -c /tmp/fuzz_json_12.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 -Z 12 00:07:17.609 [2024-12-16 12:27:23.103372] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:07:17.609 [2024-12-16 12:27:23.103440] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid989264 ] 00:07:17.868 [2024-12-16 12:27:23.294140] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.868 [2024-12-16 12:27:23.327593] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.868 [2024-12-16 12:27:23.386732] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:17.868 [2024-12-16 12:27:23.403051] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4412 *** 00:07:17.868 INFO: Running with entropic power schedule (0xFF, 100). 00:07:17.868 INFO: Seed: 1724535438 00:07:18.126 INFO: Loaded 1 modules (390948 inline 8-bit counters): 390948 [0x2c8afcc, 0x2cea6f0), 00:07:18.126 INFO: Loaded 1 PC tables (390948 PCs): 390948 [0x2cea6f0,0x32e1930), 00:07:18.126 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:07:18.126 INFO: A corpus is not provided, starting from an empty corpus 00:07:18.126 #2 INITED exec/s: 0 rss: 65Mb 00:07:18.126 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:18.127 This may also happen if the target rejected all inputs we tried so far 00:07:18.127 [2024-12-16 12:27:23.479153] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.127 [2024-12-16 12:27:23.479189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.385 NEW_FUNC[1/717]: 0x44c588 in fuzz_admin_directive_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:241 00:07:18.385 NEW_FUNC[2/717]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:18.385 #23 NEW cov: 12168 ft: 12161 corp: 2/13b lim: 40 exec/s: 0 rss: 72Mb L: 12/12 MS: 1 InsertRepeatedBytes- 00:07:18.385 [2024-12-16 12:27:23.820103] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.385 [2024-12-16 12:27:23.820142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.385 #24 NEW cov: 12281 ft: 12873 corp: 3/25b lim: 40 exec/s: 0 rss: 72Mb L: 12/12 MS: 1 ChangeBinInt- 00:07:18.385 [2024-12-16 12:27:23.890256] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:30303030 cdw11:30303030 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.385 [2024-12-16 12:27:23.890286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.385 #31 NEW cov: 12287 ft: 13004 corp: 4/35b lim: 40 exec/s: 0 rss: 72Mb L: 10/12 
MS: 2 ChangeBinInt-InsertRepeatedBytes- 00:07:18.385 [2024-12-16 12:27:23.940340] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:30303030 cdw11:30303030 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.385 [2024-12-16 12:27:23.940370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.644 #32 NEW cov: 12372 ft: 13213 corp: 5/45b lim: 40 exec/s: 0 rss: 72Mb L: 10/12 MS: 1 ShuffleBytes- 00:07:18.644 [2024-12-16 12:27:24.000508] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:30303030 cdw11:30010200 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.644 [2024-12-16 12:27:24.000538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.644 #33 NEW cov: 12372 ft: 13430 corp: 6/59b lim: 40 exec/s: 0 rss: 72Mb L: 14/14 MS: 1 CMP- DE: "\001\002\000\000"- 00:07:18.644 [2024-12-16 12:27:24.060648] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:30303030 cdw11:30303030 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.644 [2024-12-16 12:27:24.060675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.644 #34 NEW cov: 12372 ft: 13545 corp: 7/69b lim: 40 exec/s: 0 rss: 72Mb L: 10/14 MS: 1 ChangeASCIIInt- 00:07:18.644 [2024-12-16 12:27:24.101348] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.644 [2024-12-16 12:27:24.101375] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.644 [2024-12-16 12:27:24.101511] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.644 [2024-12-16 12:27:24.101528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.644 [2024-12-16 12:27:24.101656] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffff0c cdw11:ff0aff0a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.644 [2024-12-16 12:27:24.101676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:18.644 #35 NEW cov: 12372 ft: 14313 corp: 8/93b lim: 40 exec/s: 0 rss: 72Mb L: 24/24 MS: 1 CrossOver- 00:07:18.644 [2024-12-16 12:27:24.161048] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ffff0800 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.644 [2024-12-16 12:27:24.161077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.644 #36 NEW cov: 12372 ft: 14347 corp: 9/105b lim: 40 exec/s: 0 rss: 72Mb L: 12/24 MS: 1 ChangeBinInt- 00:07:18.903 [2024-12-16 12:27:24.211932] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.903 [2024-12-16 12:27:24.211960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 
cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.903 [2024-12-16 12:27:24.212093] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.903 [2024-12-16 12:27:24.212112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.903 [2024-12-16 12:27:24.212236] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ff303030 cdw11:30303030 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.903 [2024-12-16 12:27:24.212253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:18.903 [2024-12-16 12:27:24.212377] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:3030feff cdw11:ff0cff0a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.903 [2024-12-16 12:27:24.212393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:18.903 #37 NEW cov: 12372 ft: 14703 corp: 10/139b lim: 40 exec/s: 0 rss: 72Mb L: 34/34 MS: 1 CrossOver- 00:07:18.903 [2024-12-16 12:27:24.271577] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:30303030 cdw11:30303030 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.903 [2024-12-16 12:27:24.271604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.903 [2024-12-16 12:27:24.271765] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:30303030 cdw11:303030fe SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.903 [2024-12-16 12:27:24.271782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.903 #38 NEW cov: 12372 ft: 14939 corp: 11/155b lim: 40 exec/s: 0 rss: 73Mb L: 16/34 MS: 1 CrossOver- 00:07:18.903 [2024-12-16 12:27:24.311659] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.903 [2024-12-16 12:27:24.311686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.903 [2024-12-16 12:27:24.311814] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffff0c0a cdw11:01020000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.903 [2024-12-16 12:27:24.311830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.903 NEW_FUNC[1/1]: 0x1c586b8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:652 00:07:18.903 #39 NEW cov: 12395 ft: 14957 corp: 12/171b lim: 40 exec/s: 0 rss: 73Mb L: 16/34 MS: 1 PersAutoDict- DE: "\001\002\000\000"- 00:07:18.903 [2024-12-16 12:27:24.351449] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0000000c cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.903 [2024-12-16 12:27:24.351478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.903 #40 NEW cov: 12395 ft: 14966 corp: 13/183b lim: 
40 exec/s: 0 rss: 73Mb L: 12/34 MS: 1 ChangeBinInt- 00:07:18.903 [2024-12-16 12:27:24.401638] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:30303030 cdw11:30303030 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.903 [2024-12-16 12:27:24.401669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.903 #41 NEW cov: 12395 ft: 15024 corp: 14/193b lim: 40 exec/s: 0 rss: 73Mb L: 10/34 MS: 1 ShuffleBytes- 00:07:18.903 [2024-12-16 12:27:24.451789] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000007 cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.903 [2024-12-16 12:27:24.451816] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.162 #42 NEW cov: 12395 ft: 15042 corp: 15/205b lim: 40 exec/s: 42 rss: 73Mb L: 12/34 MS: 1 ChangeBinInt- 00:07:19.162 [2024-12-16 12:27:24.512019] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:30303030 cdw11:30303030 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.162 [2024-12-16 12:27:24.512046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.162 #43 NEW cov: 12395 ft: 15123 corp: 16/217b lim: 40 exec/s: 43 rss: 73Mb L: 12/34 MS: 1 EraseBytes- 00:07:19.162 [2024-12-16 12:27:24.582178] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:30303030 cdw11:30303030 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.162 [2024-12-16 12:27:24.582204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.162 #44 NEW cov: 12395 ft: 15133 corp: 17/232b lim: 40 exec/s: 44 rss: 73Mb L: 15/34 MS: 1 InsertRepeatedBytes- 00:07:19.162 [2024-12-16 12:27:24.622251] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:31303030 cdw11:30303030 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.162 [2024-12-16 12:27:24.622278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.162 #45 NEW cov: 12395 ft: 15142 corp: 18/243b lim: 40 exec/s: 45 rss: 73Mb L: 11/34 MS: 1 InsertByte- 00:07:19.162 [2024-12-16 12:27:24.692628] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:30303030 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.162 [2024-12-16 12:27:24.692655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.162 [2024-12-16 12:27:24.692778] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.162 [2024-12-16 12:27:24.692795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.162 [2024-12-16 12:27:24.692920] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000030 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.162 [2024-12-16 12:27:24.692937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE 
(00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:19.162 #46 NEW cov: 12395 ft: 15163 corp: 19/272b lim: 40 exec/s: 46 rss: 73Mb L: 29/34 MS: 1 InsertRepeatedBytes- 00:07:19.421 [2024-12-16 12:27:24.733160] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:30303030 cdw11:30303030 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.421 [2024-12-16 12:27:24.733189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.421 [2024-12-16 12:27:24.733316] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:30ffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.421 [2024-12-16 12:27:24.733335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.421 [2024-12-16 12:27:24.733454] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.421 [2024-12-16 12:27:24.733471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:19.421 #47 NEW cov: 12395 ft: 15182 corp: 20/303b lim: 40 exec/s: 47 rss: 73Mb L: 31/34 MS: 1 InsertRepeatedBytes- 00:07:19.421 [2024-12-16 12:27:24.773023] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.421 [2024-12-16 12:27:24.773049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.421 [2024-12-16 12:27:24.773178] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.421 [2024-12-16 12:27:24.773194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.421 #48 NEW cov: 12395 ft: 15223 corp: 21/326b lim: 40 exec/s: 48 rss: 73Mb L: 23/34 MS: 1 InsertRepeatedBytes- 00:07:19.421 [2024-12-16 12:27:24.813117] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:30303030 cdw11:30303030 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.421 [2024-12-16 12:27:24.813143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.422 [2024-12-16 12:27:24.813270] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:30fe3030 cdw11:30303030 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.422 [2024-12-16 12:27:24.813286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.422 #49 NEW cov: 12395 ft: 15251 corp: 22/343b lim: 40 exec/s: 49 rss: 73Mb L: 17/34 MS: 1 CopyPart- 00:07:19.422 [2024-12-16 12:27:24.863321] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:3030ffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.422 [2024-12-16 12:27:24.863347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.422 [2024-12-16 12:27:24.863465] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ff303030 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.422 [2024-12-16 12:27:24.863480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.422 #50 NEW cov: 12395 ft: 15257 corp: 23/364b lim: 40 exec/s: 50 rss: 73Mb L: 21/34 MS: 1 InsertRepeatedBytes- 00:07:19.422 [2024-12-16 12:27:24.913443] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:25303030 cdw11:30303030 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.422 [2024-12-16 12:27:24.913469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.422 [2024-12-16 12:27:24.913615] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:30303030 cdw11:303030fe SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.422 [2024-12-16 12:27:24.913632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.422 #51 NEW cov: 12395 ft: 15275 corp: 24/380b lim: 40 exec/s: 51 rss: 73Mb L: 16/34 MS: 1 ChangeByte- 00:07:19.422 [2024-12-16 12:27:24.963842] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0102 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.422 [2024-12-16 12:27:24.963873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.422 [2024-12-16 12:27:24.963997] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.422 [2024-12-16 12:27:24.964014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.422 [2024-12-16 12:27:24.964136] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.422 [2024-12-16 12:27:24.964155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:19.683 #52 NEW cov: 12395 ft: 15287 corp: 25/407b lim: 40 exec/s: 52 rss: 73Mb L: 27/34 MS: 1 PersAutoDict- DE: "\001\002\000\000"- 00:07:19.683 [2024-12-16 12:27:25.023808] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:25303030 cdw11:30303030 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.683 [2024-12-16 12:27:25.023836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.683 [2024-12-16 12:27:25.023975] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:31303030 cdw11:303030fe SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.683 [2024-12-16 12:27:25.023993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.683 #53 NEW cov: 12395 ft: 15309 corp: 26/423b lim: 40 exec/s: 53 rss: 73Mb L: 16/34 MS: 1 ChangeBit- 00:07:19.683 [2024-12-16 12:27:25.093749] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 
nsid:0 cdw10:30303030 cdw11:30303030 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.683 [2024-12-16 12:27:25.093778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.683 #54 NEW cov: 12395 ft: 15331 corp: 27/433b lim: 40 exec/s: 54 rss: 73Mb L: 10/34 MS: 1 EraseBytes- 00:07:19.683 [2024-12-16 12:27:25.144148] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:25303030 cdw11:30303030 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.683 [2024-12-16 12:27:25.144177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.683 [2024-12-16 12:27:25.144299] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:30303030 cdw11:303030fe SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.683 [2024-12-16 12:27:25.144317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.683 #55 NEW cov: 12395 ft: 15425 corp: 28/449b lim: 40 exec/s: 55 rss: 73Mb L: 16/34 MS: 1 ShuffleBytes- 00:07:19.683 [2024-12-16 12:27:25.194652] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:3030ffff cdw11:ffffff1d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.683 [2024-12-16 12:27:25.194687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.683 [2024-12-16 12:27:25.194812] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:1d1dffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.683 [2024-12-16 12:27:25.194828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.683 [2024-12-16 12:27:25.194974] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:30303030 cdw11:303030fe SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.683 [2024-12-16 12:27:25.194990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:19.683 #56 NEW cov: 12395 ft: 15434 corp: 29/473b lim: 40 exec/s: 56 rss: 73Mb L: 24/34 MS: 1 InsertRepeatedBytes- 00:07:19.944 [2024-12-16 12:27:25.254128] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:32303030 cdw11:30303030 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.944 [2024-12-16 12:27:25.254155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.944 #57 NEW cov: 12395 ft: 15442 corp: 30/483b lim: 40 exec/s: 57 rss: 73Mb L: 10/34 MS: 1 ChangeBit- 00:07:19.944 [2024-12-16 12:27:25.304339] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:25303030 cdw11:30303031 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.944 [2024-12-16 12:27:25.304366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.944 #58 NEW cov: 12395 ft: 15466 corp: 31/498b lim: 40 exec/s: 58 rss: 73Mb L: 15/34 MS: 1 EraseBytes- 00:07:19.944 [2024-12-16 12:27:25.364826] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 
cdw10:25303030 cdw11:30303030 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.944 [2024-12-16 12:27:25.364852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.944 [2024-12-16 12:27:25.364987] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:30303038 cdw11:303030fe SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.944 [2024-12-16 12:27:25.365003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.944 #59 NEW cov: 12395 ft: 15479 corp: 32/514b lim: 40 exec/s: 59 rss: 73Mb L: 16/34 MS: 1 ChangeBit- 00:07:19.944 [2024-12-16 12:27:25.425212] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:30fe3030 cdw11:20404040 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.944 [2024-12-16 12:27:25.425238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.944 [2024-12-16 12:27:25.425374] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:40404040 cdw11:40404040 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.944 [2024-12-16 12:27:25.425391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.944 [2024-12-16 12:27:25.425512] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:40404040 cdw11:404030fe SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.944 [2024-12-16 12:27:25.425527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:19.944 #64 pulse cov: 12395 ft: 15491 corp: 32/514b lim: 40 exec/s: 32 rss: 74Mb 00:07:19.944 #64 NEW cov: 12395 ft: 15491 corp: 33/538b lim: 40 exec/s: 32 rss: 74Mb L: 24/34 MS: 5 EraseBytes-ChangeASCIIInt-InsertByte-ChangeBit-InsertRepeatedBytes- 00:07:19.944 #64 DONE cov: 12395 ft: 15491 corp: 33/538b lim: 40 exec/s: 32 rss: 74Mb 00:07:19.944 ###### Recommended dictionary. ###### 00:07:19.944 "\001\002\000\000" # Uses: 2 00:07:19.944 ###### End of recommended dictionary. 
###### 00:07:19.944 Done 64 runs in 2 second(s) 00:07:20.203 12:27:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_12.conf /var/tmp/suppress_nvmf_fuzz 00:07:20.203 12:27:25 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:20.203 12:27:25 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:20.203 12:27:25 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 13 1 0x1 00:07:20.203 12:27:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=13 00:07:20.203 12:27:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:20.203 12:27:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:20.203 12:27:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:07:20.203 12:27:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_13.conf 00:07:20.203 12:27:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:20.203 12:27:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:20.203 12:27:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 13 00:07:20.203 12:27:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4413 00:07:20.203 12:27:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:07:20.203 12:27:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' 00:07:20.203 12:27:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4413"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:20.203 12:27:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:20.203 12:27:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:20.203 12:27:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' -c /tmp/fuzz_json_13.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 -Z 13 00:07:20.203 [2024-12-16 12:27:25.603764] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 
00:07:20.203 [2024-12-16 12:27:25.603816] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid989793 ] 00:07:20.463 [2024-12-16 12:27:25.785865] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.463 [2024-12-16 12:27:25.819330] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.463 [2024-12-16 12:27:25.878216] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:20.463 [2024-12-16 12:27:25.894498] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4413 *** 00:07:20.463 INFO: Running with entropic power schedule (0xFF, 100). 00:07:20.463 INFO: Seed: 4213528406 00:07:20.463 INFO: Loaded 1 modules (390948 inline 8-bit counters): 390948 [0x2c8afcc, 0x2cea6f0), 00:07:20.463 INFO: Loaded 1 PC tables (390948 PCs): 390948 [0x2cea6f0,0x32e1930), 00:07:20.463 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:07:20.463 INFO: A corpus is not provided, starting from an empty corpus 00:07:20.463 #2 INITED exec/s: 0 rss: 67Mb 00:07:20.463 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:20.463 This may also happen if the target rejected all inputs we tried so far 00:07:20.463 [2024-12-16 12:27:25.943167] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.463 [2024-12-16 12:27:25.943194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.722 NEW_FUNC[1/716]: 0x44e158 in fuzz_admin_directive_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:257 00:07:20.722 NEW_FUNC[2/716]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:20.722 #3 NEW cov: 12155 ft: 12143 corp: 2/9b lim: 40 exec/s: 0 rss: 74Mb L: 8/8 MS: 1 InsertRepeatedBytes- 00:07:20.722 [2024-12-16 12:27:26.253987] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:8affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.722 [2024-12-16 12:27:26.254026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.722 #11 NEW cov: 12269 ft: 12746 corp: 3/20b lim: 40 exec/s: 0 rss: 74Mb L: 11/11 MS: 3 ShuffleBytes-ChangeBit-InsertRepeatedBytes- 00:07:20.981 [2024-12-16 12:27:26.294160] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.981 [2024-12-16 12:27:26.294187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.981 [2024-12-16 12:27:26.294245] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:8a8a8a8a cdw11:8a8a8a8a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.981 [2024-12-16 12:27:26.294259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 
m:0 dnr:0 00:07:20.981 #17 NEW cov: 12275 ft: 13265 corp: 4/40b lim: 40 exec/s: 0 rss: 74Mb L: 20/20 MS: 1 InsertRepeatedBytes- 00:07:20.981 [2024-12-16 12:27:26.354451] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.981 [2024-12-16 12:27:26.354477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.981 [2024-12-16 12:27:26.354535] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:0000000a cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.981 [2024-12-16 12:27:26.354549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.981 [2024-12-16 12:27:26.354606] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffff8a cdw11:8a8a8a8a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.981 [2024-12-16 12:27:26.354624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:20.981 #18 NEW cov: 12360 ft: 13736 corp: 5/71b lim: 40 exec/s: 0 rss: 74Mb L: 31/31 MS: 1 InsertRepeatedBytes- 00:07:20.981 [2024-12-16 12:27:26.414470] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:03030303 cdw11:03030303 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.981 [2024-12-16 12:27:26.414496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.981 [2024-12-16 12:27:26.414556] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:03030303 cdw11:03030303 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.981 [2024-12-16 12:27:26.414570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.981 #19 NEW cov: 12360 ft: 13830 corp: 6/89b lim: 40 exec/s: 0 rss: 74Mb L: 18/31 MS: 1 InsertRepeatedBytes- 00:07:20.981 [2024-12-16 12:27:26.454463] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffff98ff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.981 [2024-12-16 12:27:26.454489] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.981 #22 NEW cov: 12360 ft: 13872 corp: 7/98b lim: 40 exec/s: 0 rss: 75Mb L: 9/31 MS: 3 EraseBytes-ChangeByte-CopyPart- 00:07:20.981 [2024-12-16 12:27:26.494700] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:03030903 cdw11:03030303 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.981 [2024-12-16 12:27:26.494725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.981 [2024-12-16 12:27:26.494783] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:03030303 cdw11:03030303 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.981 [2024-12-16 12:27:26.494797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.981 #23 NEW cov: 12360 ft: 13940 
corp: 8/116b lim: 40 exec/s: 0 rss: 75Mb L: 18/31 MS: 1 ChangeBinInt- 00:07:21.240 [2024-12-16 12:27:26.554865] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.240 [2024-12-16 12:27:26.554891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.240 [2024-12-16 12:27:26.554949] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:8a8a8a8a cdw11:8a8a8a8a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.240 [2024-12-16 12:27:26.554962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.240 #24 NEW cov: 12360 ft: 13955 corp: 9/136b lim: 40 exec/s: 0 rss: 75Mb L: 20/31 MS: 1 ChangeBit- 00:07:21.240 [2024-12-16 12:27:26.594945] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:03030903 cdw11:03030303 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.240 [2024-12-16 12:27:26.594972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.240 [2024-12-16 12:27:26.595026] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:03030303 cdw11:03030303 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.240 [2024-12-16 12:27:26.595040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.240 #25 NEW cov: 12360 ft: 13988 corp: 10/155b lim: 40 exec/s: 0 rss: 75Mb L: 19/31 MS: 1 InsertByte- 00:07:21.240 [2024-12-16 12:27:26.655203] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:03030903 cdw11:03030303 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.240 [2024-12-16 12:27:26.655228] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.240 #26 NEW cov: 12360 ft: 14031 corp: 11/168b lim: 40 exec/s: 0 rss: 75Mb L: 13/31 MS: 1 EraseBytes- 00:07:21.240 [2024-12-16 12:27:26.695279] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:8affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.241 [2024-12-16 12:27:26.695304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.241 #32 NEW cov: 12360 ft: 14073 corp: 12/177b lim: 40 exec/s: 0 rss: 75Mb L: 9/31 MS: 1 EraseBytes- 00:07:21.241 [2024-12-16 12:27:26.755723] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ff121212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.241 [2024-12-16 12:27:26.755748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.241 [2024-12-16 12:27:26.755806] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:12121212 cdw11:12121212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.241 [2024-12-16 12:27:26.755819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.241 [2024-12-16 
12:27:26.755876] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:12121212 cdw11:121212ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.241 [2024-12-16 12:27:26.755889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:21.241 #33 NEW cov: 12360 ft: 14078 corp: 13/203b lim: 40 exec/s: 0 rss: 75Mb L: 26/31 MS: 1 InsertRepeatedBytes- 00:07:21.241 [2024-12-16 12:27:26.795828] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.241 [2024-12-16 12:27:26.795853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.241 [2024-12-16 12:27:26.795912] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:0000000a cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.241 [2024-12-16 12:27:26.795929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.241 [2024-12-16 12:27:26.795985] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffff8a cdw11:8a8a8a88 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.241 [2024-12-16 12:27:26.795998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:21.500 NEW_FUNC[1/1]: 0x1c586b8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:652 00:07:21.500 #39 NEW cov: 12383 ft: 14136 corp: 14/234b lim: 40 exec/s: 0 rss: 75Mb L: 31/31 MS: 1 ChangeBit- 00:07:21.500 [2024-12-16 12:27:26.855898] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.500 [2024-12-16 12:27:26.855924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.500 [2024-12-16 12:27:26.855982] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:8a8a8a8a cdw11:8a8a8a8a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.500 [2024-12-16 12:27:26.855996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.500 #40 NEW cov: 12383 ft: 14180 corp: 15/254b lim: 40 exec/s: 0 rss: 75Mb L: 20/31 MS: 1 ChangeByte- 00:07:21.500 [2024-12-16 12:27:26.915948] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffff98ff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.500 [2024-12-16 12:27:26.915973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.500 #41 NEW cov: 12383 ft: 14249 corp: 16/263b lim: 40 exec/s: 41 rss: 75Mb L: 9/31 MS: 1 ShuffleBytes- 00:07:21.500 [2024-12-16 12:27:26.976207] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:03030903 cdw11:ff000303 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.500 [2024-12-16 12:27:26.976232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 
sqhd:000f p:0 m:0 dnr:0 00:07:21.500 [2024-12-16 12:27:26.976289] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:03030303 cdw11:03030303 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.500 [2024-12-16 12:27:26.976303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.500 #42 NEW cov: 12383 ft: 14259 corp: 17/281b lim: 40 exec/s: 42 rss: 75Mb L: 18/31 MS: 1 CMP- DE: "\377\000"- 00:07:21.500 [2024-12-16 12:27:27.016312] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:03030903 cdw11:03030303 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.500 [2024-12-16 12:27:27.016337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.500 [2024-12-16 12:27:27.016392] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:03030303 cdw11:03030303 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.500 [2024-12-16 12:27:27.016406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.500 #43 NEW cov: 12383 ft: 14266 corp: 18/299b lim: 40 exec/s: 43 rss: 75Mb L: 18/31 MS: 1 EraseBytes- 00:07:21.759 [2024-12-16 12:27:27.076590] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:03030903 cdw11:ffcdcdcd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.759 [2024-12-16 12:27:27.076620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.759 [2024-12-16 12:27:27.076683] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:cdcdcdcd cdw11:cdcdcdcd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.759 [2024-12-16 12:27:27.076697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.759 [2024-12-16 12:27:27.076752] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:cd000303 cdw11:03030303 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.759 [2024-12-16 12:27:27.076766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:21.759 #44 NEW cov: 12383 ft: 14347 corp: 19/329b lim: 40 exec/s: 44 rss: 75Mb L: 30/31 MS: 1 InsertRepeatedBytes- 00:07:21.759 [2024-12-16 12:27:27.136546] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:8affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.759 [2024-12-16 12:27:27.136571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.759 #45 NEW cov: 12383 ft: 14375 corp: 20/340b lim: 40 exec/s: 45 rss: 76Mb L: 11/31 MS: 1 ShuffleBytes- 00:07:21.759 [2024-12-16 12:27:27.176757] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:03030952 cdw11:03030303 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.759 [2024-12-16 12:27:27.176783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.759 [2024-12-16 12:27:27.176841] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:03030303 cdw11:03030303 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.759 [2024-12-16 12:27:27.176855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.759 #46 NEW cov: 12383 ft: 14455 corp: 21/358b lim: 40 exec/s: 46 rss: 76Mb L: 18/31 MS: 1 ChangeByte- 00:07:21.759 [2024-12-16 12:27:27.216987] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.759 [2024-12-16 12:27:27.217013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.759 [2024-12-16 12:27:27.217073] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff8a8a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.759 [2024-12-16 12:27:27.217087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.759 [2024-12-16 12:27:27.217145] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:8a8a8a8a cdw11:8a8a828a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.759 [2024-12-16 12:27:27.217159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:21.759 #47 NEW cov: 12383 ft: 14470 corp: 22/384b lim: 40 exec/s: 47 rss: 76Mb L: 26/31 MS: 1 CopyPart- 00:07:21.759 [2024-12-16 12:27:27.277140] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ff121212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.760 [2024-12-16 12:27:27.277165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.760 [2024-12-16 12:27:27.277225] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ff001212 cdw11:12121212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.760 [2024-12-16 12:27:27.277239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.760 [2024-12-16 12:27:27.277297] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:12121212 cdw11:121212ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.760 [2024-12-16 12:27:27.277314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:21.760 #48 NEW cov: 12383 ft: 14480 corp: 23/410b lim: 40 exec/s: 48 rss: 76Mb L: 26/31 MS: 1 PersAutoDict- DE: "\377\000"- 00:07:22.019 [2024-12-16 12:27:27.337312] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:03030903 cdw11:03030303 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.019 [2024-12-16 12:27:27.337337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.019 [2024-12-16 12:27:27.337396] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:03030303 cdw11:03030303 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.019 
[2024-12-16 12:27:27.337410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.019 [2024-12-16 12:27:27.337467] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:150a0303 cdw11:03030303 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.019 [2024-12-16 12:27:27.337481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:22.019 #54 NEW cov: 12383 ft: 14488 corp: 24/440b lim: 40 exec/s: 54 rss: 76Mb L: 30/31 MS: 1 CopyPart- 00:07:22.019 [2024-12-16 12:27:27.397473] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.019 [2024-12-16 12:27:27.397498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.019 [2024-12-16 12:27:27.397559] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:0000000a cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.019 [2024-12-16 12:27:27.397573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.019 [2024-12-16 12:27:27.397631] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:fff7ff8a cdw11:8a8a8a8a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.019 [2024-12-16 12:27:27.397645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:22.019 #55 NEW cov: 12383 ft: 14569 corp: 25/471b lim: 40 exec/s: 55 rss: 76Mb L: 31/31 MS: 1 ChangeBit- 00:07:22.019 [2024-12-16 12:27:27.437600] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.019 [2024-12-16 12:27:27.437630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.019 [2024-12-16 12:27:27.437687] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:47000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.019 [2024-12-16 12:27:27.437701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.019 [2024-12-16 12:27:27.437758] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:8a8a8a8a cdw11:8a8a828a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.019 [2024-12-16 12:27:27.437772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:22.019 #56 NEW cov: 12383 ft: 14588 corp: 26/497b lim: 40 exec/s: 56 rss: 76Mb L: 26/31 MS: 1 CMP- DE: "G\000\000\000\000\000\000\000"- 00:07:22.019 [2024-12-16 12:27:27.497735] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:8affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.019 [2024-12-16 12:27:27.497760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.019 [2024-12-16 
12:27:27.497822] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:03030303 cdw11:03030303 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.019 [2024-12-16 12:27:27.497837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.019 [2024-12-16 12:27:27.497895] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:03030303 cdw11:03ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.019 [2024-12-16 12:27:27.497909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:22.019 #57 NEW cov: 12383 ft: 14597 corp: 27/521b lim: 40 exec/s: 57 rss: 76Mb L: 24/31 MS: 1 CrossOver- 00:07:22.019 [2024-12-16 12:27:27.557793] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:03030903 cdw11:03030303 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.019 [2024-12-16 12:27:27.557819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.019 [2024-12-16 12:27:27.557877] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:03030303 cdw11:03030303 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.020 [2024-12-16 12:27:27.557891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.020 #58 NEW cov: 12383 ft: 14619 corp: 28/540b lim: 40 exec/s: 58 rss: 76Mb L: 19/31 MS: 1 CopyPart- 00:07:22.278 [2024-12-16 12:27:27.597787] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:8affffff cdw11:3affffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.278 [2024-12-16 12:27:27.597812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.278 #59 NEW cov: 12383 ft: 14635 corp: 29/552b lim: 40 exec/s: 59 rss: 76Mb L: 12/31 MS: 1 InsertByte- 00:07:22.278 [2024-12-16 12:27:27.637943] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:8affffff cdw11:ffffff27 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.278 [2024-12-16 12:27:27.637968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.278 #60 NEW cov: 12383 ft: 14646 corp: 30/561b lim: 40 exec/s: 60 rss: 76Mb L: 9/31 MS: 1 ChangeByte- 00:07:22.278 [2024-12-16 12:27:27.698083] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:8affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.278 [2024-12-16 12:27:27.698108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.279 #61 NEW cov: 12383 ft: 14677 corp: 31/569b lim: 40 exec/s: 61 rss: 76Mb L: 8/31 MS: 1 EraseBytes- 00:07:22.279 [2024-12-16 12:27:27.738316] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:8affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.279 [2024-12-16 12:27:27.738343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:07:22.279 [2024-12-16 12:27:27.738400] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.279 [2024-12-16 12:27:27.738415] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.279 #62 NEW cov: 12383 ft: 14743 corp: 32/587b lim: 40 exec/s: 62 rss: 76Mb L: 18/31 MS: 1 CopyPart- 00:07:22.279 [2024-12-16 12:27:27.778614] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.279 [2024-12-16 12:27:27.778640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.279 [2024-12-16 12:27:27.778703] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:0000000a cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.279 [2024-12-16 12:27:27.778716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.279 [2024-12-16 12:27:27.778774] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffff8a cdw11:8a8a8a8a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.279 [2024-12-16 12:27:27.778788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:22.279 #63 NEW cov: 12383 ft: 14744 corp: 33/618b lim: 40 exec/s: 63 rss: 76Mb L: 31/31 MS: 1 ChangeBit- 00:07:22.279 [2024-12-16 12:27:27.818576] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:03030903 cdw11:03030303 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.279 [2024-12-16 12:27:27.818601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.279 [2024-12-16 12:27:27.818669] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:03030303 cdw11:03030303 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.279 [2024-12-16 12:27:27.818683] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.279 #64 NEW cov: 12383 ft: 14753 corp: 34/635b lim: 40 exec/s: 64 rss: 76Mb L: 17/31 MS: 1 EraseBytes- 00:07:22.538 [2024-12-16 12:27:27.858671] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:8affffff cdw11:ffffff04 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.538 [2024-12-16 12:27:27.858696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.538 [2024-12-16 12:27:27.858755] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:5fb02f25 cdw11:20f8ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.538 [2024-12-16 12:27:27.858769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.538 #65 NEW cov: 12383 ft: 14764 corp: 35/652b lim: 40 exec/s: 65 rss: 76Mb L: 17/31 MS: 1 CMP- DE: "\377\004_\260/% \370"- 00:07:22.538 [2024-12-16 12:27:27.899062] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:8affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.539 [2024-12-16 12:27:27.899087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.539 [2024-12-16 12:27:27.899145] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:03030303 cdw11:03030303 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.539 [2024-12-16 12:27:27.899159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.539 [2024-12-16 12:27:27.899217] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:03030347 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.539 [2024-12-16 12:27:27.899232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:22.539 [2024-12-16 12:27:27.899289] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000003 cdw11:03ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.539 [2024-12-16 12:27:27.899302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:22.539 #66 NEW cov: 12383 ft: 15232 corp: 36/684b lim: 40 exec/s: 33 rss: 76Mb L: 32/32 MS: 1 PersAutoDict- DE: "G\000\000\000\000\000\000\000"- 00:07:22.539 #66 DONE cov: 12383 ft: 15232 corp: 36/684b lim: 40 exec/s: 33 rss: 76Mb 00:07:22.539 ###### Recommended dictionary. ###### 00:07:22.539 "\377\000" # Uses: 1 00:07:22.539 "G\000\000\000\000\000\000\000" # Uses: 1 00:07:22.539 "\377\004_\260/% \370" # Uses: 0 00:07:22.539 ###### End of recommended dictionary. 
###### 00:07:22.539 Done 66 runs in 2 second(s) 00:07:22.539 12:27:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_13.conf /var/tmp/suppress_nvmf_fuzz 00:07:22.539 12:27:28 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:22.539 12:27:28 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:22.539 12:27:28 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 14 1 0x1 00:07:22.539 12:27:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=14 00:07:22.539 12:27:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:22.539 12:27:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:22.539 12:27:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:07:22.539 12:27:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_14.conf 00:07:22.539 12:27:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:22.539 12:27:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:22.539 12:27:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 14 00:07:22.539 12:27:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4414 00:07:22.539 12:27:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:07:22.539 12:27:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' 00:07:22.539 12:27:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4414"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:22.539 12:27:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:22.539 12:27:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:22.539 12:27:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' -c /tmp/fuzz_json_14.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 -Z 14 00:07:22.539 [2024-12-16 12:27:28.090046] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 
00:07:22.539 [2024-12-16 12:27:28.090113] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid990224 ] 00:07:22.798 [2024-12-16 12:27:28.278810] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.798 [2024-12-16 12:27:28.312093] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.057 [2024-12-16 12:27:28.370895] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:23.057 [2024-12-16 12:27:28.387206] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4414 *** 00:07:23.057 INFO: Running with entropic power schedule (0xFF, 100). 00:07:23.057 INFO: Seed: 2411562737 00:07:23.057 INFO: Loaded 1 modules (390948 inline 8-bit counters): 390948 [0x2c8afcc, 0x2cea6f0), 00:07:23.057 INFO: Loaded 1 PC tables (390948 PCs): 390948 [0x2cea6f0,0x32e1930), 00:07:23.057 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:07:23.057 INFO: A corpus is not provided, starting from an empty corpus 00:07:23.057 #2 INITED exec/s: 0 rss: 65Mb 00:07:23.057 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:23.057 This may also happen if the target rejected all inputs we tried so far 00:07:23.057 [2024-12-16 12:27:28.436062] ctrlr.c:1927:nvmf_ctrlr_set_features_reservation_persistence: *ERROR*: Set Features - Invalid Namespace ID or Reservation Configuration 00:07:23.057 [2024-12-16 12:27:28.436502] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES HOST RESERVE PERSIST cid:4 cdw10:00000083 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.057 [2024-12-16 12:27:28.436533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.057 [2024-12-16 12:27:28.436589] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:0000007c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.057 [2024-12-16 12:27:28.436604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.057 [2024-12-16 12:27:28.436666] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:0000007c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.057 [2024-12-16 12:27:28.436680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:23.316 NEW_FUNC[1/718]: 0x44fd28 in fuzz_admin_set_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:392 00:07:23.316 NEW_FUNC[2/718]: 0x477068 in feat_rsv_persistence /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:386 00:07:23.316 #7 NEW cov: 12190 ft: 12189 corp: 2/23b lim: 35 exec/s: 0 rss: 72Mb L: 22/22 MS: 5 CrossOver-InsertByte-ChangeBit-ChangeByte-InsertRepeatedBytes- 00:07:23.316 [2024-12-16 12:27:28.756918] ctrlr.c:1927:nvmf_ctrlr_set_features_reservation_persistence: *ERROR*: Set Features - Invalid Namespace ID or Reservation Configuration 00:07:23.316 [2024-12-16 12:27:28.757494] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES HOST RESERVE PERSIST cid:4 cdw10:00000083 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:07:23.316 [2024-12-16 12:27:28.757525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.316 [2024-12-16 12:27:28.757587] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:0000007c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.316 [2024-12-16 12:27:28.757601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.316 [2024-12-16 12:27:28.757666] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.316 [2024-12-16 12:27:28.757680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:23.316 [2024-12-16 12:27:28.757739] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:0000007c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.316 [2024-12-16 12:27:28.757752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:23.316 NEW_FUNC[1/1]: 0x19650f8 in nvme_qpair_check_enabled /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_qpair.c:636 00:07:23.316 #8 NEW cov: 12307 ft: 12958 corp: 3/53b lim: 35 exec/s: 0 rss: 72Mb L: 30/30 MS: 1 CMP- DE: "\001\000\000\000\000\000\000\004"- 00:07:23.316 [2024-12-16 12:27:28.816948] ctrlr.c:1927:nvmf_ctrlr_set_features_reservation_persistence: *ERROR*: Set Features - Invalid Namespace ID or Reservation Configuration 00:07:23.316 [2024-12-16 12:27:28.817389] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES HOST RESERVE PERSIST cid:4 cdw10:00000083 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.316 [2024-12-16 12:27:28.817416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.317 [2024-12-16 12:27:28.817473] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:0000007c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.317 [2024-12-16 12:27:28.817487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.317 [2024-12-16 12:27:28.817543] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:0000007c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.317 [2024-12-16 12:27:28.817561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:23.317 #14 NEW cov: 12313 ft: 13290 corp: 4/75b lim: 35 exec/s: 0 rss: 72Mb L: 22/30 MS: 1 ChangeBit- 00:07:23.317 [2024-12-16 12:27:28.857076] ctrlr.c:1927:nvmf_ctrlr_set_features_reservation_persistence: *ERROR*: Set Features - Invalid Namespace ID or Reservation Configuration 00:07:23.317 [2024-12-16 12:27:28.857624] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES HOST RESERVE PERSIST cid:4 cdw10:00000083 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.317 [2024-12-16 12:27:28.857653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.317 [2024-12-16 12:27:28.857712] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:0000007c SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:07:23.317 [2024-12-16 12:27:28.857726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.317 [2024-12-16 12:27:28.857781] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.317 [2024-12-16 12:27:28.857795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:23.317 [2024-12-16 12:27:28.857849] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:0000007c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.317 [2024-12-16 12:27:28.857863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:23.576 #15 NEW cov: 12398 ft: 13547 corp: 5/105b lim: 35 exec/s: 0 rss: 72Mb L: 30/30 MS: 1 ChangeBit- 00:07:23.576 [2024-12-16 12:27:28.917173] ctrlr.c:1927:nvmf_ctrlr_set_features_reservation_persistence: *ERROR*: Set Features - Invalid Namespace ID or Reservation Configuration 00:07:23.576 [2024-12-16 12:27:28.917608] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES HOST RESERVE PERSIST cid:4 cdw10:00000083 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.576 [2024-12-16 12:27:28.917639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.576 [2024-12-16 12:27:28.917698] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:0000007c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.576 [2024-12-16 12:27:28.917712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.576 [2024-12-16 12:27:28.917771] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:0000007c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.576 [2024-12-16 12:27:28.917785] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:23.576 #16 NEW cov: 12398 ft: 13698 corp: 6/129b lim: 35 exec/s: 0 rss: 72Mb L: 24/30 MS: 1 CopyPart- 00:07:23.576 [2024-12-16 12:27:28.957937] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES HOST RESERVE PERSIST cid:4 cdw10:80000083 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.576 [2024-12-16 12:27:28.957964] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.576 [2024-12-16 12:27:28.958022] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:0000007c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.576 [2024-12-16 12:27:28.958037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.576 [2024-12-16 12:27:28.958094] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.576 [2024-12-16 12:27:28.958108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:23.576 [2024-12-16 12:27:28.958164] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:0000007c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.576 
[2024-12-16 12:27:28.958180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:23.576 #17 NEW cov: 12405 ft: 13829 corp: 7/163b lim: 35 exec/s: 0 rss: 72Mb L: 34/34 MS: 1 InsertRepeatedBytes- 00:07:23.576 [2024-12-16 12:27:29.017500] ctrlr.c:1927:nvmf_ctrlr_set_features_reservation_persistence: *ERROR*: Set Features - Invalid Namespace ID or Reservation Configuration 00:07:23.576 [2024-12-16 12:27:29.017962] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES HOST RESERVE PERSIST cid:4 cdw10:00000083 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.576 [2024-12-16 12:27:29.017988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.576 [2024-12-16 12:27:29.018048] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:0000007c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.576 [2024-12-16 12:27:29.018063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.576 [2024-12-16 12:27:29.018120] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.576 [2024-12-16 12:27:29.018134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:23.576 #18 NEW cov: 12405 ft: 13885 corp: 8/187b lim: 35 exec/s: 0 rss: 72Mb L: 24/34 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\004"- 00:07:23.576 [2024-12-16 12:27:29.077651] ctrlr.c:1927:nvmf_ctrlr_set_features_reservation_persistence: *ERROR*: Set Features - Invalid Namespace ID or Reservation Configuration 00:07:23.576 [2024-12-16 12:27:29.078097] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES HOST RESERVE PERSIST cid:4 cdw10:00000083 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.576 [2024-12-16 12:27:29.078123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.576 [2024-12-16 12:27:29.078182] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000078 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.576 [2024-12-16 12:27:29.078196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.576 [2024-12-16 12:27:29.078249] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:0000007c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.576 [2024-12-16 12:27:29.078263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:23.576 #19 NEW cov: 12405 ft: 13902 corp: 9/211b lim: 35 exec/s: 0 rss: 72Mb L: 24/34 MS: 1 ChangeBit- 00:07:23.576 [2024-12-16 12:27:29.117749] ctrlr.c:1927:nvmf_ctrlr_set_features_reservation_persistence: *ERROR*: Set Features - Invalid Namespace ID or Reservation Configuration 00:07:23.576 [2024-12-16 12:27:29.118192] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES HOST RESERVE PERSIST cid:4 cdw10:00000083 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.576 [2024-12-16 12:27:29.118216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.576 [2024-12-16 
12:27:29.118273] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:0000007c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.576 [2024-12-16 12:27:29.118287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.576 [2024-12-16 12:27:29.118344] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:0000007c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.576 [2024-12-16 12:27:29.118358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:23.836 #20 NEW cov: 12405 ft: 13937 corp: 10/233b lim: 35 exec/s: 0 rss: 72Mb L: 22/34 MS: 1 ChangeBit- 00:07:23.836 [2024-12-16 12:27:29.177921] ctrlr.c:1927:nvmf_ctrlr_set_features_reservation_persistence: *ERROR*: Set Features - Invalid Namespace ID or Reservation Configuration 00:07:23.836 [2024-12-16 12:27:29.178386] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES HOST RESERVE PERSIST cid:4 cdw10:00000083 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.836 [2024-12-16 12:27:29.178412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.836 [2024-12-16 12:27:29.178472] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000078 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.836 [2024-12-16 12:27:29.178486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.836 [2024-12-16 12:27:29.178539] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:0000007c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.836 [2024-12-16 12:27:29.178554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:23.836 #21 NEW cov: 12405 ft: 13978 corp: 11/258b lim: 35 exec/s: 0 rss: 73Mb L: 25/34 MS: 1 InsertByte- 00:07:23.836 [2024-12-16 12:27:29.238175] ctrlr.c:1927:nvmf_ctrlr_set_features_reservation_persistence: *ERROR*: Set Features - Invalid Namespace ID or Reservation Configuration 00:07:23.836 [2024-12-16 12:27:29.238868] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES HOST RESERVE PERSIST cid:4 cdw10:00000083 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.836 [2024-12-16 12:27:29.238896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.836 [2024-12-16 12:27:29.238955] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:0000007c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.836 [2024-12-16 12:27:29.238969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.836 [2024-12-16 12:27:29.239026] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:0000007c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.836 [2024-12-16 12:27:29.239041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:23.836 [2024-12-16 12:27:29.239095] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.836 [2024-12-16 
12:27:29.239110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:23.836 [2024-12-16 12:27:29.239166] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.836 [2024-12-16 12:27:29.239182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:23.836 #22 NEW cov: 12405 ft: 14065 corp: 12/293b lim: 35 exec/s: 0 rss: 73Mb L: 35/35 MS: 1 InsertRepeatedBytes- 00:07:23.836 [2024-12-16 12:27:29.278189] ctrlr.c:1927:nvmf_ctrlr_set_features_reservation_persistence: *ERROR*: Set Features - Invalid Namespace ID or Reservation Configuration 00:07:23.836 [2024-12-16 12:27:29.278646] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES HOST RESERVE PERSIST cid:4 cdw10:00000083 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.836 [2024-12-16 12:27:29.278673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.836 [2024-12-16 12:27:29.278729] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000078 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.836 [2024-12-16 12:27:29.278743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.836 [2024-12-16 12:27:29.278800] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:0000007c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.836 [2024-12-16 12:27:29.278818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:23.836 #23 NEW cov: 12405 ft: 14129 corp: 13/317b lim: 35 exec/s: 0 rss: 73Mb L: 24/35 MS: 1 ChangeBit- 00:07:23.836 [2024-12-16 12:27:29.318293] ctrlr.c:1927:nvmf_ctrlr_set_features_reservation_persistence: *ERROR*: Set Features - Invalid Namespace ID or Reservation Configuration 00:07:23.836 [2024-12-16 12:27:29.318774] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES HOST RESERVE PERSIST cid:4 cdw10:00000083 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.836 [2024-12-16 12:27:29.318800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.836 [2024-12-16 12:27:29.318856] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:0000007c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.836 [2024-12-16 12:27:29.318871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.836 [2024-12-16 12:27:29.318927] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:0000007c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.836 [2024-12-16 12:27:29.318940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:23.836 NEW_FUNC[1/1]: 0x1c586b8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:652 00:07:23.836 #24 NEW cov: 12428 ft: 14210 corp: 14/339b lim: 35 exec/s: 0 rss: 73Mb L: 22/35 MS: 1 ShuffleBytes- 00:07:23.836 [2024-12-16 12:27:29.359214] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 
cdw10:0000008b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.836 [2024-12-16 12:27:29.359241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.836 [2024-12-16 12:27:29.359298] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:0000007c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.836 [2024-12-16 12:27:29.359312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.836 [2024-12-16 12:27:29.359368] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:0000007c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.836 [2024-12-16 12:27:29.359381] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:23.837 [2024-12-16 12:27:29.359438] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.837 [2024-12-16 12:27:29.359452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:23.837 [2024-12-16 12:27:29.359507] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.837 [2024-12-16 12:27:29.359523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:24.096 #25 NEW cov: 12428 ft: 14259 corp: 15/374b lim: 35 exec/s: 0 rss: 73Mb L: 35/35 MS: 1 ChangeBinInt- 00:07:24.096 [2024-12-16 12:27:29.418633] ctrlr.c:1927:nvmf_ctrlr_set_features_reservation_persistence: *ERROR*: Set Features - Invalid Namespace ID or Reservation Configuration 00:07:24.096 [2024-12-16 12:27:29.419082] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES HOST RESERVE PERSIST cid:4 cdw10:00000083 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.096 [2024-12-16 12:27:29.419108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.096 [2024-12-16 12:27:29.419166] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:0000007c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.096 [2024-12-16 12:27:29.419180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.096 [2024-12-16 12:27:29.419237] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.096 [2024-12-16 12:27:29.419251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:24.096 #26 NEW cov: 12428 ft: 14279 corp: 16/397b lim: 35 exec/s: 26 rss: 73Mb L: 23/35 MS: 1 EraseBytes- 00:07:24.096 [2024-12-16 12:27:29.478777] ctrlr.c:1927:nvmf_ctrlr_set_features_reservation_persistence: *ERROR*: Set Features - Invalid Namespace ID or Reservation Configuration 00:07:24.096 [2024-12-16 12:27:29.479247] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES HOST RESERVE PERSIST cid:4 cdw10:00000083 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.096 [2024-12-16 12:27:29.479274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD 
(00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.096 [2024-12-16 12:27:29.479331] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:0000007c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.096 [2024-12-16 12:27:29.479346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.096 [2024-12-16 12:27:29.479401] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.096 [2024-12-16 12:27:29.479415] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:24.096 #27 NEW cov: 12428 ft: 14322 corp: 17/420b lim: 35 exec/s: 27 rss: 73Mb L: 23/35 MS: 1 ChangeBit- 00:07:24.096 [2024-12-16 12:27:29.538997] ctrlr.c:1927:nvmf_ctrlr_set_features_reservation_persistence: *ERROR*: Set Features - Invalid Namespace ID or Reservation Configuration 00:07:24.096 [2024-12-16 12:27:29.539576] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES HOST RESERVE PERSIST cid:4 cdw10:00000083 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.096 [2024-12-16 12:27:29.539603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.096 [2024-12-16 12:27:29.539665] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:0000007c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.096 [2024-12-16 12:27:29.539680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.096 [2024-12-16 12:27:29.539738] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:000000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.096 [2024-12-16 12:27:29.539752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:24.096 NEW_FUNC[1/3]: 0x46c9f8 in feat_temperature_threshold /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:295 00:07:24.096 NEW_FUNC[2/3]: 0x13867a8 in temp_threshold_opts_valid /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:1644 00:07:24.096 #28 NEW cov: 12484 ft: 14381 corp: 18/448b lim: 35 exec/s: 28 rss: 73Mb L: 28/35 MS: 1 InsertRepeatedBytes- 00:07:24.096 [2024-12-16 12:27:29.579047] ctrlr.c:1927:nvmf_ctrlr_set_features_reservation_persistence: *ERROR*: Set Features - Invalid Namespace ID or Reservation Configuration 00:07:24.096 [2024-12-16 12:27:29.579624] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES HOST RESERVE PERSIST cid:4 cdw10:00000083 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.096 [2024-12-16 12:27:29.579651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.096 [2024-12-16 12:27:29.579708] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:0000007c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.096 [2024-12-16 12:27:29.579723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.096 [2024-12-16 12:27:29.579785] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:000000ff SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:07:24.096 [2024-12-16 12:27:29.579799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:24.096 #29 NEW cov: 12484 ft: 14386 corp: 19/476b lim: 35 exec/s: 29 rss: 73Mb L: 28/35 MS: 1 ChangeByte- 00:07:24.096 [2024-12-16 12:27:29.639237] ctrlr.c:1927:nvmf_ctrlr_set_features_reservation_persistence: *ERROR*: Set Features - Invalid Namespace ID or Reservation Configuration 00:07:24.096 [2024-12-16 12:27:29.639705] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES HOST RESERVE PERSIST cid:4 cdw10:00000083 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.096 [2024-12-16 12:27:29.639732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.096 [2024-12-16 12:27:29.639789] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000078 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.096 [2024-12-16 12:27:29.639804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.096 [2024-12-16 12:27:29.639860] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:0000007c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.096 [2024-12-16 12:27:29.639874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:24.355 #30 NEW cov: 12484 ft: 14427 corp: 20/500b lim: 35 exec/s: 30 rss: 73Mb L: 24/35 MS: 1 ChangeBinInt- 00:07:24.355 [2024-12-16 12:27:29.699508] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES HOST RESERVE PERSIST cid:4 cdw10:80000083 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.355 [2024-12-16 12:27:29.699537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.355 #31 NEW cov: 12484 ft: 15159 corp: 21/508b lim: 35 exec/s: 31 rss: 73Mb L: 8/35 MS: 1 CrossOver- 00:07:24.355 [2024-12-16 12:27:29.739460] ctrlr.c:1927:nvmf_ctrlr_set_features_reservation_persistence: *ERROR*: Set Features - Invalid Namespace ID or Reservation Configuration 00:07:24.355 [2024-12-16 12:27:29.739823] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES HOST RESERVE PERSIST cid:4 cdw10:00000083 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.355 [2024-12-16 12:27:29.739848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.355 [2024-12-16 12:27:29.739906] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:0000007c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.355 [2024-12-16 12:27:29.739921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.355 #32 NEW cov: 12484 ft: 15346 corp: 22/525b lim: 35 exec/s: 32 rss: 73Mb L: 17/35 MS: 1 EraseBytes- 00:07:24.355 [2024-12-16 12:27:29.799624] ctrlr.c:1927:nvmf_ctrlr_set_features_reservation_persistence: *ERROR*: Set Features - Invalid Namespace ID or Reservation Configuration 00:07:24.355 [2024-12-16 12:27:29.799970] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES HOST RESERVE PERSIST cid:4 cdw10:00000083 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.355 [2024-12-16 12:27:29.799996] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.355 [2024-12-16 12:27:29.800054] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:0000007c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.355 [2024-12-16 12:27:29.800068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.355 #33 NEW cov: 12484 ft: 15347 corp: 23/545b lim: 35 exec/s: 33 rss: 73Mb L: 20/35 MS: 1 EraseBytes- 00:07:24.355 [2024-12-16 12:27:29.859819] ctrlr.c:1927:nvmf_ctrlr_set_features_reservation_persistence: *ERROR*: Set Features - Invalid Namespace ID or Reservation Configuration 00:07:24.356 [2024-12-16 12:27:29.860279] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES HOST RESERVE PERSIST cid:4 cdw10:00000083 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.356 [2024-12-16 12:27:29.860306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.356 [2024-12-16 12:27:29.860365] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:0000007c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.356 [2024-12-16 12:27:29.860380] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.356 [2024-12-16 12:27:29.860436] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:0000007c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.356 [2024-12-16 12:27:29.860449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:24.356 #34 NEW cov: 12484 ft: 15358 corp: 24/567b lim: 35 exec/s: 34 rss: 73Mb L: 22/35 MS: 1 ShuffleBytes- 00:07:24.356 [2024-12-16 12:27:29.900372] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES HOST RESERVE PERSIST cid:4 cdw10:80000083 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.356 [2024-12-16 12:27:29.900399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.356 [2024-12-16 12:27:29.900460] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:0000007c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.356 [2024-12-16 12:27:29.900474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.356 [2024-12-16 12:27:29.900532] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:0000007c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.356 [2024-12-16 12:27:29.900546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:24.615 #35 NEW cov: 12484 ft: 15405 corp: 25/591b lim: 35 exec/s: 35 rss: 73Mb L: 24/35 MS: 1 ChangeByte- 00:07:24.615 [2024-12-16 12:27:29.940803] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000008b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.615 [2024-12-16 12:27:29.940829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.615 [2024-12-16 12:27:29.940886] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 
cdw10:0000007c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.615 [2024-12-16 12:27:29.940900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.615 [2024-12-16 12:27:29.940957] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:0000007c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.615 [2024-12-16 12:27:29.940971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:24.615 [2024-12-16 12:27:29.941027] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.615 [2024-12-16 12:27:29.941043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:24.615 [2024-12-16 12:27:29.941097] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.615 [2024-12-16 12:27:29.941113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:24.615 #36 NEW cov: 12484 ft: 15423 corp: 26/626b lim: 35 exec/s: 36 rss: 73Mb L: 35/35 MS: 1 CrossOver- 00:07:24.615 [2024-12-16 12:27:30.000222] ctrlr.c:1927:nvmf_ctrlr_set_features_reservation_persistence: *ERROR*: Set Features - Invalid Namespace ID or Reservation Configuration 00:07:24.615 [2024-12-16 12:27:30.000717] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES HOST RESERVE PERSIST cid:4 cdw10:00000083 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.615 [2024-12-16 12:27:30.000749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.615 [2024-12-16 12:27:30.000815] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:0000007c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.615 [2024-12-16 12:27:30.000834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.615 [2024-12-16 12:27:30.000902] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:0000007c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.615 [2024-12-16 12:27:30.000924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:24.615 #37 NEW cov: 12484 ft: 15494 corp: 27/648b lim: 35 exec/s: 37 rss: 74Mb L: 22/35 MS: 1 ShuffleBytes- 00:07:24.615 [2024-12-16 12:27:30.061163] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000008b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.615 [2024-12-16 12:27:30.061193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.615 [2024-12-16 12:27:30.061257] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:0000007c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.615 [2024-12-16 12:27:30.061273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.615 [2024-12-16 12:27:30.061334] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:0000007c 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.615 [2024-12-16 12:27:30.061350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:24.615 [2024-12-16 12:27:30.061428] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.615 [2024-12-16 12:27:30.061445] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:24.615 [2024-12-16 12:27:30.061508] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.615 [2024-12-16 12:27:30.061528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:24.615 #38 NEW cov: 12484 ft: 15506 corp: 28/683b lim: 35 exec/s: 38 rss: 74Mb L: 35/35 MS: 1 ShuffleBytes- 00:07:24.615 [2024-12-16 12:27:30.101295] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000008b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.615 [2024-12-16 12:27:30.101321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.615 [2024-12-16 12:27:30.101395] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:0000007c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.615 [2024-12-16 12:27:30.101409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.615 [2024-12-16 12:27:30.101468] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.615 [2024-12-16 12:27:30.101482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:24.615 [2024-12-16 12:27:30.101537] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.615 [2024-12-16 12:27:30.101553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:24.615 [2024-12-16 12:27:30.101608] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.615 [2024-12-16 12:27:30.101630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:24.615 #39 NEW cov: 12484 ft: 15563 corp: 29/718b lim: 35 exec/s: 39 rss: 74Mb L: 35/35 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\004"- 00:07:24.615 [2024-12-16 12:27:30.160703] ctrlr.c:1927:nvmf_ctrlr_set_features_reservation_persistence: *ERROR*: Set Features - Invalid Namespace ID or Reservation Configuration 00:07:24.615 [2024-12-16 12:27:30.161168] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES HOST RESERVE PERSIST cid:4 cdw10:00000083 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.615 [2024-12-16 12:27:30.161195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.615 [2024-12-16 12:27:30.161255] nvme_qpair.c: 215:nvme_admin_qpair_print_command: 
*NOTICE*: SET FEATURES RESERVED cid:5 cdw10:0000007c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.616 [2024-12-16 12:27:30.161269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.616 [2024-12-16 12:27:30.161326] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:0000007c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.616 [2024-12-16 12:27:30.161340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:24.875 #40 NEW cov: 12484 ft: 15603 corp: 30/741b lim: 35 exec/s: 40 rss: 74Mb L: 23/35 MS: 1 InsertByte- 00:07:24.875 [2024-12-16 12:27:30.201556] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000a4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.875 [2024-12-16 12:27:30.201585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.875 [2024-12-16 12:27:30.201647] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:0000007c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.875 [2024-12-16 12:27:30.201661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.875 [2024-12-16 12:27:30.201800] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:0000007c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.875 [2024-12-16 12:27:30.201814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:24.875 NEW_FUNC[1/2]: 0x46a708 in feat_arbitration /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:273 00:07:24.875 NEW_FUNC[2/2]: 0x138a5c8 in nvmf_ctrlr_set_features_arbitration /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:1607 00:07:24.875 #41 NEW cov: 12541 ft: 15667 corp: 31/776b lim: 35 exec/s: 41 rss: 74Mb L: 35/35 MS: 1 InsertByte- 00:07:24.875 [2024-12-16 12:27:30.261417] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000008b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.875 [2024-12-16 12:27:30.261444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.875 [2024-12-16 12:27:30.261505] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:0000007c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.875 [2024-12-16 12:27:30.261520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.875 [2024-12-16 12:27:30.261579] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.875 [2024-12-16 12:27:30.261595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:24.875 #42 NEW cov: 12541 ft: 15685 corp: 32/800b lim: 35 exec/s: 42 rss: 74Mb L: 24/35 MS: 1 EraseBytes- 00:07:24.875 [2024-12-16 12:27:30.321214] ctrlr.c:1927:nvmf_ctrlr_set_features_reservation_persistence: *ERROR*: Set Features - Invalid Namespace ID or Reservation Configuration 00:07:24.875 [2024-12-16 12:27:30.321820] 
nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES HOST RESERVE PERSIST cid:4 cdw10:00000083 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.875 [2024-12-16 12:27:30.321847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.875 [2024-12-16 12:27:30.321906] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:0000007c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.875 [2024-12-16 12:27:30.321919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.875 [2024-12-16 12:27:30.321976] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:000000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.875 [2024-12-16 12:27:30.321990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:24.875 #43 NEW cov: 12541 ft: 15697 corp: 33/828b lim: 35 exec/s: 43 rss: 74Mb L: 28/35 MS: 1 ChangeByte- 00:07:24.875 [2024-12-16 12:27:30.361254] ctrlr.c:1927:nvmf_ctrlr_set_features_reservation_persistence: *ERROR*: Set Features - Invalid Namespace ID or Reservation Configuration 00:07:24.875 [2024-12-16 12:27:30.361588] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES HOST RESERVE PERSIST cid:4 cdw10:00000083 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.876 [2024-12-16 12:27:30.361618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.876 [2024-12-16 12:27:30.361681] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:0000007c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.876 [2024-12-16 12:27:30.361696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.876 #44 NEW cov: 12541 ft: 15722 corp: 34/844b lim: 35 exec/s: 44 rss: 74Mb L: 16/35 MS: 1 EraseBytes- 00:07:24.876 [2024-12-16 12:27:30.401339] ctrlr.c:1927:nvmf_ctrlr_set_features_reservation_persistence: *ERROR*: Set Features - Invalid Namespace ID or Reservation Configuration 00:07:24.876 [2024-12-16 12:27:30.401818] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES HOST RESERVE PERSIST cid:4 cdw10:00000083 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.876 [2024-12-16 12:27:30.401844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.876 [2024-12-16 12:27:30.401904] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:0000007c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.876 [2024-12-16 12:27:30.401918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.876 [2024-12-16 12:27:30.401974] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:0000007c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.876 [2024-12-16 12:27:30.401988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:24.876 #45 NEW cov: 12541 ft: 15732 corp: 35/866b lim: 35 exec/s: 22 rss: 74Mb L: 22/35 MS: 1 ShuffleBytes- 00:07:24.876 #45 DONE cov: 12541 ft: 15732 corp: 35/866b lim: 35 exec/s: 22 rss: 74Mb 00:07:24.876 ###### 
Recommended dictionary. ###### 00:07:24.876 "\001\000\000\000\000\000\000\004" # Uses: 2 00:07:24.876 ###### End of recommended dictionary. ###### 00:07:24.876 Done 45 runs in 2 second(s) 00:07:25.136 12:27:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_14.conf /var/tmp/suppress_nvmf_fuzz 00:07:25.136 12:27:30 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:25.136 12:27:30 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:25.136 12:27:30 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 15 1 0x1 00:07:25.136 12:27:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=15 00:07:25.136 12:27:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:25.136 12:27:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:25.136 12:27:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:07:25.136 12:27:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_15.conf 00:07:25.136 12:27:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:25.136 12:27:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:25.136 12:27:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 15 00:07:25.136 12:27:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4415 00:07:25.136 12:27:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:07:25.136 12:27:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' 00:07:25.136 12:27:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4415"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:25.136 12:27:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:25.136 12:27:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:25.136 12:27:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' -c /tmp/fuzz_json_15.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 -Z 15 00:07:25.136 [2024-12-16 12:27:30.558619] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 
00:07:25.136 [2024-12-16 12:27:30.558670] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid990611 ] 00:07:25.395 [2024-12-16 12:27:30.738520] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.395 [2024-12-16 12:27:30.773160] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.395 [2024-12-16 12:27:30.832613] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:25.395 [2024-12-16 12:27:30.848946] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4415 *** 00:07:25.395 INFO: Running with entropic power schedule (0xFF, 100). 00:07:25.395 INFO: Seed: 580607796 00:07:25.395 INFO: Loaded 1 modules (390948 inline 8-bit counters): 390948 [0x2c8afcc, 0x2cea6f0), 00:07:25.395 INFO: Loaded 1 PC tables (390948 PCs): 390948 [0x2cea6f0,0x32e1930), 00:07:25.395 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:07:25.395 INFO: A corpus is not provided, starting from an empty corpus 00:07:25.395 #2 INITED exec/s: 0 rss: 66Mb 00:07:25.395 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:25.395 This may also happen if the target rejected all inputs we tried so far 00:07:25.395 [2024-12-16 12:27:30.904634] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000734 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.395 [2024-12-16 12:27:30.904664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.395 [2024-12-16 12:27:30.904722] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.395 [2024-12-16 12:27:30.904737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.395 [2024-12-16 12:27:30.904792] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.395 [2024-12-16 12:27:30.904806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:25.395 [2024-12-16 12:27:30.904864] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.395 [2024-12-16 12:27:30.904879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:25.962 NEW_FUNC[1/716]: 0x451268 in fuzz_admin_get_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:460 00:07:25.962 NEW_FUNC[2/716]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:25.962 #11 NEW cov: 12138 ft: 12137 corp: 2/34b lim: 35 exec/s: 0 rss: 72Mb L: 33/33 MS: 4 CopyPart-ShuffleBytes-ChangeByte-InsertRepeatedBytes- 00:07:25.962 [2024-12-16 12:27:31.245402] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.962 [2024-12-16 
12:27:31.245440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.962 [2024-12-16 12:27:31.245509] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.962 [2024-12-16 12:27:31.245527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.962 [2024-12-16 12:27:31.245591] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.962 [2024-12-16 12:27:31.245607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:25.962 #13 NEW cov: 12251 ft: 13117 corp: 3/60b lim: 35 exec/s: 0 rss: 72Mb L: 26/33 MS: 2 CrossOver-CrossOver- 00:07:25.962 [2024-12-16 12:27:31.285367] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.962 [2024-12-16 12:27:31.285396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.962 NEW_FUNC[1/1]: 0x471278 in feat_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:340 00:07:25.962 #14 NEW cov: 12271 ft: 13508 corp: 4/80b lim: 35 exec/s: 0 rss: 72Mb L: 20/33 MS: 1 InsertRepeatedBytes- 00:07:25.962 [2024-12-16 12:27:31.325529] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000734 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.962 [2024-12-16 12:27:31.325556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.962 [2024-12-16 12:27:31.325614] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.962 [2024-12-16 12:27:31.325626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.962 [2024-12-16 12:27:31.325644] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.962 [2024-12-16 12:27:31.325657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:25.962 #15 NEW cov: 12356 ft: 13931 corp: 5/104b lim: 35 exec/s: 0 rss: 72Mb L: 24/33 MS: 1 EraseBytes- 00:07:25.962 [2024-12-16 12:27:31.385901] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000126 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.962 [2024-12-16 12:27:31.385927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.962 [2024-12-16 12:27:31.385987] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.962 [2024-12-16 12:27:31.386004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:25.962 [2024-12-16 12:27:31.386061] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 
cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.962 [2024-12-16 12:27:31.386074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:25.962 #16 NEW cov: 12356 ft: 14052 corp: 6/136b lim: 35 exec/s: 0 rss: 73Mb L: 32/33 MS: 1 InsertRepeatedBytes- 00:07:25.962 [2024-12-16 12:27:31.445803] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.962 [2024-12-16 12:27:31.445828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.962 [2024-12-16 12:27:31.445888] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.962 [2024-12-16 12:27:31.445902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.962 [2024-12-16 12:27:31.445957] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.962 [2024-12-16 12:27:31.445971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:25.962 #17 NEW cov: 12356 ft: 14103 corp: 7/162b lim: 35 exec/s: 0 rss: 73Mb L: 26/33 MS: 1 ShuffleBytes- 00:07:25.963 [2024-12-16 12:27:31.506140] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000006c0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.963 [2024-12-16 12:27:31.506166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.963 [2024-12-16 12:27:31.506226] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006c0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.963 [2024-12-16 12:27:31.506240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.963 [2024-12-16 12:27:31.506297] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006c0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.963 [2024-12-16 12:27:31.506311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:25.963 [2024-12-16 12:27:31.506369] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000006c0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.963 [2024-12-16 12:27:31.506382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:26.221 #18 NEW cov: 12356 ft: 14137 corp: 8/193b lim: 35 exec/s: 0 rss: 73Mb L: 31/33 MS: 1 InsertRepeatedBytes- 00:07:26.221 [2024-12-16 12:27:31.546069] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.221 [2024-12-16 12:27:31.546094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:26.221 #19 NEW cov: 12356 ft: 14170 corp: 9/213b lim: 35 exec/s: 0 rss: 73Mb L: 20/33 MS: 1 ChangeBit- 00:07:26.221 [2024-12-16 12:27:31.586160] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: 
GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.221 [2024-12-16 12:27:31.586185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:26.221 #20 NEW cov: 12356 ft: 14267 corp: 10/233b lim: 35 exec/s: 0 rss: 73Mb L: 20/33 MS: 1 ChangeByte- 00:07:26.221 [2024-12-16 12:27:31.646485] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000734 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.221 [2024-12-16 12:27:31.646511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:26.221 [2024-12-16 12:27:31.646576] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.221 [2024-12-16 12:27:31.646590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:26.221 [2024-12-16 12:27:31.646650] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.221 [2024-12-16 12:27:31.646664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:26.221 #21 NEW cov: 12356 ft: 14316 corp: 11/257b lim: 35 exec/s: 0 rss: 73Mb L: 24/33 MS: 1 ChangeBinInt- 00:07:26.221 [2024-12-16 12:27:31.706553] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.221 [2024-12-16 12:27:31.706578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:26.221 [2024-12-16 12:27:31.706642] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.221 [2024-12-16 12:27:31.706657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:26.221 [2024-12-16 12:27:31.706713] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.221 [2024-12-16 12:27:31.706726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:26.221 #22 NEW cov: 12356 ft: 14371 corp: 12/283b lim: 35 exec/s: 0 rss: 73Mb L: 26/33 MS: 1 CrossOver- 00:07:26.221 [2024-12-16 12:27:31.746877] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000126 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.221 [2024-12-16 12:27:31.746904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:26.221 [2024-12-16 12:27:31.746965] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000026 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.221 [2024-12-16 12:27:31.746979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:26.221 [2024-12-16 12:27:31.747040] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.221 
[2024-12-16 12:27:31.747054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:26.480 NEW_FUNC[1/1]: 0x1c586b8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:652 00:07:26.480 #23 NEW cov: 12379 ft: 14415 corp: 13/316b lim: 35 exec/s: 0 rss: 73Mb L: 33/33 MS: 1 InsertByte- 00:07:26.480 [2024-12-16 12:27:31.806935] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.480 [2024-12-16 12:27:31.806961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:26.480 [2024-12-16 12:27:31.807025] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.480 [2024-12-16 12:27:31.807038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:26.480 #24 NEW cov: 12379 ft: 14534 corp: 14/337b lim: 35 exec/s: 0 rss: 73Mb L: 21/33 MS: 1 InsertByte- 00:07:26.480 [2024-12-16 12:27:31.866968] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.480 [2024-12-16 12:27:31.866994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:26.480 #25 NEW cov: 12379 ft: 14572 corp: 15/354b lim: 35 exec/s: 0 rss: 73Mb L: 17/33 MS: 1 EraseBytes- 00:07:26.480 [2024-12-16 12:27:31.907285] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000006c0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.480 [2024-12-16 12:27:31.907311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:26.480 [2024-12-16 12:27:31.907377] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006c0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.480 [2024-12-16 12:27:31.907391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:26.480 [2024-12-16 12:27:31.907452] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000000c0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.480 [2024-12-16 12:27:31.907466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:26.480 [2024-12-16 12:27:31.907528] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000006c0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.480 [2024-12-16 12:27:31.907542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:26.480 #26 NEW cov: 12379 ft: 14579 corp: 16/385b lim: 35 exec/s: 26 rss: 73Mb L: 31/33 MS: 1 CMP- DE: "\001\000\000\273"- 00:07:26.480 [2024-12-16 12:27:31.967293] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000734 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.480 [2024-12-16 12:27:31.967317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:26.480 [2024-12-16 
12:27:31.967379] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:0000072a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.480 [2024-12-16 12:27:31.967393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:26.480 [2024-12-16 12:27:31.967449] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.480 [2024-12-16 12:27:31.967463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:26.480 #27 NEW cov: 12379 ft: 14591 corp: 17/409b lim: 35 exec/s: 27 rss: 73Mb L: 24/33 MS: 1 ChangeByte- 00:07:26.480 [2024-12-16 12:27:32.027606] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000006c0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.480 [2024-12-16 12:27:32.027635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:26.480 [2024-12-16 12:27:32.027696] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006c0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.480 [2024-12-16 12:27:32.027710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:26.480 [2024-12-16 12:27:32.027770] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000000c0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.480 [2024-12-16 12:27:32.027783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:26.480 [2024-12-16 12:27:32.027842] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000006c0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.480 [2024-12-16 12:27:32.027855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:26.739 #28 NEW cov: 12379 ft: 14665 corp: 18/443b lim: 35 exec/s: 28 rss: 73Mb L: 34/34 MS: 1 InsertRepeatedBytes- 00:07:26.739 [2024-12-16 12:27:32.087894] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000126 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.739 [2024-12-16 12:27:32.087924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:26.739 [2024-12-16 12:27:32.087988] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000026 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.739 [2024-12-16 12:27:32.088002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:26.739 [2024-12-16 12:27:32.088062] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.739 [2024-12-16 12:27:32.088077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:26.739 #29 NEW cov: 12379 ft: 14721 corp: 19/476b lim: 35 exec/s: 29 rss: 74Mb L: 33/34 MS: 1 ChangeBit- 00:07:26.739 [2024-12-16 12:27:32.147583] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET 
FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.739 [2024-12-16 12:27:32.147607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:26.739 #30 NEW cov: 12379 ft: 14737 corp: 20/491b lim: 35 exec/s: 30 rss: 74Mb L: 15/34 MS: 1 EraseBytes- 00:07:26.739 [2024-12-16 12:27:32.188228] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.739 [2024-12-16 12:27:32.188253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:26.739 [2024-12-16 12:27:32.188316] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.739 [2024-12-16 12:27:32.188331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:26.739 [2024-12-16 12:27:32.188390] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000005bf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.739 [2024-12-16 12:27:32.188404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:26.739 [2024-12-16 12:27:32.188462] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.739 [2024-12-16 12:27:32.188477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:26.739 [2024-12-16 12:27:32.188535] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.739 [2024-12-16 12:27:32.188549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:26.739 #31 NEW cov: 12379 ft: 14778 corp: 21/526b lim: 35 exec/s: 31 rss: 74Mb L: 35/35 MS: 1 InsertRepeatedBytes- 00:07:26.739 [2024-12-16 12:27:32.248289] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.739 [2024-12-16 12:27:32.248314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:26.739 [2024-12-16 12:27:32.248376] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.739 [2024-12-16 12:27:32.248390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:26.739 [2024-12-16 12:27:32.248450] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000020 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.739 [2024-12-16 12:27:32.248463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:26.739 #32 NEW cov: 12379 ft: 14803 corp: 22/560b lim: 35 exec/s: 32 rss: 74Mb L: 34/35 MS: 1 CopyPart- 00:07:26.739 [2024-12-16 12:27:32.288245] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:07:26.739 [2024-12-16 12:27:32.288270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:26.739 [2024-12-16 12:27:32.288333] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.739 [2024-12-16 12:27:32.288346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:26.739 [2024-12-16 12:27:32.288408] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000700 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.739 [2024-12-16 12:27:32.288422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:26.998 #33 NEW cov: 12379 ft: 14818 corp: 23/584b lim: 35 exec/s: 33 rss: 74Mb L: 24/35 MS: 1 CrossOver- 00:07:26.998 [2024-12-16 12:27:32.348313] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.998 [2024-12-16 12:27:32.348338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:26.998 #34 NEW cov: 12379 ft: 14891 corp: 24/604b lim: 35 exec/s: 34 rss: 74Mb L: 20/35 MS: 1 ShuffleBytes- 00:07:26.998 [2024-12-16 12:27:32.388590] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.998 [2024-12-16 12:27:32.388619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:26.998 [2024-12-16 12:27:32.388678] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000030 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.998 [2024-12-16 12:27:32.388692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:26.998 #35 NEW cov: 12379 ft: 14942 corp: 25/629b lim: 35 exec/s: 35 rss: 74Mb L: 25/35 MS: 1 CrossOver- 00:07:26.998 [2024-12-16 12:27:32.428549] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.998 [2024-12-16 12:27:32.428575] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:26.998 #36 NEW cov: 12379 ft: 14959 corp: 26/645b lim: 35 exec/s: 36 rss: 74Mb L: 16/35 MS: 1 EraseBytes- 00:07:26.998 [2024-12-16 12:27:32.468878] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.998 [2024-12-16 12:27:32.468903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:26.998 [2024-12-16 12:27:32.468962] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.998 [2024-12-16 12:27:32.468977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:26.998 [2024-12-16 12:27:32.469037] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:07:26.998 [2024-12-16 12:27:32.469050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:26.998 [2024-12-16 12:27:32.469109] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.998 [2024-12-16 12:27:32.469122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:26.998 #37 NEW cov: 12379 ft: 14969 corp: 27/675b lim: 35 exec/s: 37 rss: 74Mb L: 30/35 MS: 1 PersAutoDict- DE: "\001\000\000\273"- 00:07:26.998 [2024-12-16 12:27:32.529034] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000006c0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.998 [2024-12-16 12:27:32.529059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:26.998 [2024-12-16 12:27:32.529119] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006c0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.998 [2024-12-16 12:27:32.529133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:26.998 [2024-12-16 12:27:32.529188] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006c0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.998 [2024-12-16 12:27:32.529202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:26.998 [2024-12-16 12:27:32.529258] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000006c0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.998 [2024-12-16 12:27:32.529271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:26.998 #38 NEW cov: 12379 ft: 15085 corp: 28/707b lim: 35 exec/s: 38 rss: 74Mb L: 32/35 MS: 1 InsertByte- 00:07:27.257 [2024-12-16 12:27:32.569084] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.257 [2024-12-16 12:27:32.569110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.257 [2024-12-16 12:27:32.569169] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.257 [2024-12-16 12:27:32.569183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:27.257 #40 NEW cov: 12379 ft: 15090 corp: 29/728b lim: 35 exec/s: 40 rss: 74Mb L: 21/35 MS: 2 InsertByte-CrossOver- 00:07:27.257 [2024-12-16 12:27:32.609166] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000034 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.257 [2024-12-16 12:27:32.609191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.257 [2024-12-16 12:27:32.609246] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.257 [2024-12-16 
12:27:32.609260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.257 [2024-12-16 12:27:32.609316] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.257 [2024-12-16 12:27:32.609329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:27.257 #41 NEW cov: 12379 ft: 15113 corp: 30/752b lim: 35 exec/s: 41 rss: 74Mb L: 24/35 MS: 1 ChangeBinInt- 00:07:27.257 [2024-12-16 12:27:32.649439] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000126 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.257 [2024-12-16 12:27:32.649465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.257 [2024-12-16 12:27:32.649524] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000100 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.257 [2024-12-16 12:27:32.649538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:27.257 [2024-12-16 12:27:32.649594] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.257 [2024-12-16 12:27:32.649614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:27.257 #42 NEW cov: 12379 ft: 15122 corp: 31/784b lim: 35 exec/s: 42 rss: 74Mb L: 32/35 MS: 1 ShuffleBytes- 00:07:27.257 [2024-12-16 12:27:32.689497] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000006c0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.257 [2024-12-16 12:27:32.689522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.257 [2024-12-16 12:27:32.689584] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006c0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.257 [2024-12-16 12:27:32.689598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.257 [2024-12-16 12:27:32.689656] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000000c0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.257 [2024-12-16 12:27:32.689670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:27.257 [2024-12-16 12:27:32.689726] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000006c0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.257 [2024-12-16 12:27:32.689739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:27.257 #43 NEW cov: 12379 ft: 15130 corp: 32/815b lim: 35 exec/s: 43 rss: 74Mb L: 31/35 MS: 1 ChangeBinInt- 00:07:27.257 [2024-12-16 12:27:32.729594] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.257 [2024-12-16 12:27:32.729624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD 
(00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.257 [2024-12-16 12:27:32.729680] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.257 [2024-12-16 12:27:32.729694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:27.257 #44 NEW cov: 12379 ft: 15143 corp: 33/836b lim: 35 exec/s: 44 rss: 74Mb L: 21/35 MS: 1 ChangeByte- 00:07:27.257 [2024-12-16 12:27:32.789604] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.257 [2024-12-16 12:27:32.789634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.257 [2024-12-16 12:27:32.789695] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.258 [2024-12-16 12:27:32.789716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.258 [2024-12-16 12:27:32.789773] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.258 [2024-12-16 12:27:32.789786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:27.258 #45 NEW cov: 12379 ft: 15155 corp: 34/859b lim: 35 exec/s: 45 rss: 74Mb L: 23/35 MS: 1 EraseBytes- 00:07:27.517 [2024-12-16 12:27:32.829837] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.517 [2024-12-16 12:27:32.829862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.517 [2024-12-16 12:27:32.829922] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.517 [2024-12-16 12:27:32.829935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.517 [2024-12-16 12:27:32.829993] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.517 [2024-12-16 12:27:32.830011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:27.517 [2024-12-16 12:27:32.830069] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.517 [2024-12-16 12:27:32.830082] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:27.517 #46 NEW cov: 12379 ft: 15174 corp: 35/893b lim: 35 exec/s: 46 rss: 74Mb L: 34/35 MS: 1 PersAutoDict- DE: "\001\000\000\273"- 00:07:27.517 [2024-12-16 12:27:32.889945] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.517 [2024-12-16 12:27:32.889971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.517 
[2024-12-16 12:27:32.890028] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.517 [2024-12-16 12:27:32.890042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:27.517 #47 NEW cov: 12379 ft: 15178 corp: 36/914b lim: 35 exec/s: 23 rss: 74Mb L: 21/35 MS: 1 ChangeBit- 00:07:27.517 #47 DONE cov: 12379 ft: 15178 corp: 36/914b lim: 35 exec/s: 23 rss: 74Mb 00:07:27.517 ###### Recommended dictionary. ###### 00:07:27.517 "\001\000\000\273" # Uses: 2 00:07:27.517 ###### End of recommended dictionary. ###### 00:07:27.517 Done 47 runs in 2 second(s) 00:07:27.517 12:27:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_15.conf /var/tmp/suppress_nvmf_fuzz 00:07:27.517 12:27:33 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:27.517 12:27:33 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:27.517 12:27:33 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 16 1 0x1 00:07:27.517 12:27:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=16 00:07:27.517 12:27:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:27.517 12:27:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:27.517 12:27:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:07:27.517 12:27:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_16.conf 00:07:27.517 12:27:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:27.517 12:27:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:27.517 12:27:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 16 00:07:27.517 12:27:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4416 00:07:27.517 12:27:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:07:27.517 12:27:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' 00:07:27.517 12:27:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4416"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:27.517 12:27:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:27.517 12:27:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:27.517 12:27:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' -c /tmp/fuzz_json_16.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 -Z 16 00:07:27.517 [2024-12-16 12:27:33.080119] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 
00:07:27.517 [2024-12-16 12:27:33.080195] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid991142 ] 00:07:27.776 [2024-12-16 12:27:33.263866] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.776 [2024-12-16 12:27:33.296605] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.036 [2024-12-16 12:27:33.355688] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:28.036 [2024-12-16 12:27:33.372011] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4416 *** 00:07:28.036 INFO: Running with entropic power schedule (0xFF, 100). 00:07:28.036 INFO: Seed: 3101620997 00:07:28.036 INFO: Loaded 1 modules (390948 inline 8-bit counters): 390948 [0x2c8afcc, 0x2cea6f0), 00:07:28.036 INFO: Loaded 1 PC tables (390948 PCs): 390948 [0x2cea6f0,0x32e1930), 00:07:28.036 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:07:28.036 INFO: A corpus is not provided, starting from an empty corpus 00:07:28.036 #2 INITED exec/s: 0 rss: 65Mb 00:07:28.036 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:28.036 This may also happen if the target rejected all inputs we tried so far 00:07:28.037 [2024-12-16 12:27:33.431252] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18229723551135235324 len:64765 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.037 [2024-12-16 12:27:33.431282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.037 [2024-12-16 12:27:33.431320] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18229723555195321596 len:64765 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.037 [2024-12-16 12:27:33.431338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.037 [2024-12-16 12:27:33.431394] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18229723555195321596 len:64765 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.037 [2024-12-16 12:27:33.431411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:28.296 NEW_FUNC[1/717]: 0x452728 in fuzz_nvm_read_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:519 00:07:28.296 NEW_FUNC[2/717]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:28.296 #24 NEW cov: 12242 ft: 12243 corp: 2/80b lim: 105 exec/s: 0 rss: 72Mb L: 79/79 MS: 2 CrossOver-InsertRepeatedBytes- 00:07:28.296 [2024-12-16 12:27:33.762319] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18229723551135235324 len:64765 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.296 [2024-12-16 12:27:33.762400] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.296 [2024-12-16 12:27:33.762511] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 
lba:18229723555195321596 len:64765 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.296 [2024-12-16 12:27:33.762554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.296 #25 NEW cov: 12355 ft: 13294 corp: 3/136b lim: 105 exec/s: 0 rss: 72Mb L: 56/79 MS: 1 EraseBytes- 00:07:28.296 [2024-12-16 12:27:33.832306] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:64765 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.296 [2024-12-16 12:27:33.832334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.296 [2024-12-16 12:27:33.832381] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18229723555195321596 len:64765 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.296 [2024-12-16 12:27:33.832400] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.296 [2024-12-16 12:27:33.832455] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18229723555195321596 len:64765 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.296 [2024-12-16 12:27:33.832472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:28.296 [2024-12-16 12:27:33.832527] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18229723555195321596 len:64765 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.296 [2024-12-16 12:27:33.832542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:28.296 #26 NEW cov: 12361 ft: 13977 corp: 4/223b lim: 105 exec/s: 0 rss: 72Mb L: 87/87 MS: 1 InsertRepeatedBytes- 00:07:28.555 [2024-12-16 12:27:33.872184] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073659023359 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.555 [2024-12-16 12:27:33.872214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.555 [2024-12-16 12:27:33.872271] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.555 [2024-12-16 12:27:33.872287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.555 #31 NEW cov: 12446 ft: 14245 corp: 5/269b lim: 105 exec/s: 0 rss: 72Mb L: 46/87 MS: 5 CrossOver-CopyPart-CopyPart-EraseBytes-InsertRepeatedBytes- 00:07:28.555 [2024-12-16 12:27:33.912536] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:64765 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.555 [2024-12-16 12:27:33.912564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.555 [2024-12-16 12:27:33.912615] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744060774120703 len:64765 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.555 [2024-12-16 12:27:33.912632] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.555 [2024-12-16 12:27:33.912687] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18229723555195321596 len:64765 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.555 [2024-12-16 12:27:33.912704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:28.555 [2024-12-16 12:27:33.912761] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18229723555195321596 len:64765 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.555 [2024-12-16 12:27:33.912776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:28.555 #32 NEW cov: 12446 ft: 14434 corp: 6/360b lim: 105 exec/s: 0 rss: 72Mb L: 91/91 MS: 1 InsertRepeatedBytes- 00:07:28.555 [2024-12-16 12:27:33.972512] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:64765 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.555 [2024-12-16 12:27:33.972541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.555 [2024-12-16 12:27:33.972582] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18229723555195321596 len:64765 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.555 [2024-12-16 12:27:33.972597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.555 #33 NEW cov: 12446 ft: 14527 corp: 7/409b lim: 105 exec/s: 0 rss: 72Mb L: 49/91 MS: 1 EraseBytes- 00:07:28.555 [2024-12-16 12:27:34.012474] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:64765 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.555 [2024-12-16 12:27:34.012502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.555 #39 NEW cov: 12446 ft: 15005 corp: 8/445b lim: 105 exec/s: 0 rss: 72Mb L: 36/91 MS: 1 EraseBytes- 00:07:28.555 [2024-12-16 12:27:34.072994] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18229723551135235324 len:64765 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.555 [2024-12-16 12:27:34.073022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.555 [2024-12-16 12:27:34.073067] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18229723555195321596 len:64765 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.555 [2024-12-16 12:27:34.073083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.555 [2024-12-16 12:27:34.073138] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18229723555195321596 len:64765 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.555 [2024-12-16 12:27:34.073154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:28.555 [2024-12-16 12:27:34.073208] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 
lba:18229723555195321596 len:64765 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.555 [2024-12-16 12:27:34.073225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:28.555 #40 NEW cov: 12446 ft: 15096 corp: 9/532b lim: 105 exec/s: 0 rss: 72Mb L: 87/91 MS: 1 CopyPart- 00:07:28.555 [2024-12-16 12:27:34.113100] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:64765 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.555 [2024-12-16 12:27:34.113127] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.555 [2024-12-16 12:27:34.113178] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18229723555195321596 len:64765 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.555 [2024-12-16 12:27:34.113193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.555 [2024-12-16 12:27:34.113245] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18229723555195321596 len:64765 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.555 [2024-12-16 12:27:34.113261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:28.555 [2024-12-16 12:27:34.113313] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18229723555195321596 len:64765 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.555 [2024-12-16 12:27:34.113328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:28.815 #41 NEW cov: 12446 ft: 15108 corp: 10/619b lim: 105 exec/s: 0 rss: 72Mb L: 87/91 MS: 1 ChangeByte- 00:07:28.815 [2024-12-16 12:27:34.152954] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18229723554927803644 len:64765 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.815 [2024-12-16 12:27:34.152982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.815 [2024-12-16 12:27:34.153034] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18229723555195321596 len:64765 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.815 [2024-12-16 12:27:34.153050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.815 #43 NEW cov: 12446 ft: 15146 corp: 11/670b lim: 105 exec/s: 0 rss: 72Mb L: 51/91 MS: 2 ChangeByte-CrossOver- 00:07:28.815 [2024-12-16 12:27:34.193099] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:64765 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.815 [2024-12-16 12:27:34.193127] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.815 [2024-12-16 12:27:34.193177] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18229723555195321596 len:64765 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.815 [2024-12-16 12:27:34.193194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) 
qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.815 #49 NEW cov: 12446 ft: 15174 corp: 12/721b lim: 105 exec/s: 0 rss: 72Mb L: 51/91 MS: 1 CMP- DE: "\025\001"- 00:07:28.815 [2024-12-16 12:27:34.233211] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073659023359 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.815 [2024-12-16 12:27:34.233240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.815 [2024-12-16 12:27:34.233283] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.815 [2024-12-16 12:27:34.233298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.815 #50 NEW cov: 12446 ft: 15264 corp: 13/767b lim: 105 exec/s: 0 rss: 72Mb L: 46/91 MS: 1 ChangeBit- 00:07:28.815 [2024-12-16 12:27:34.293350] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:64765 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.815 [2024-12-16 12:27:34.293379] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.815 [2024-12-16 12:27:34.293435] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18229723555195321596 len:64765 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.815 [2024-12-16 12:27:34.293453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.815 NEW_FUNC[1/1]: 0x1c586b8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:652 00:07:28.815 #51 NEW cov: 12469 ft: 15297 corp: 14/816b lim: 105 exec/s: 0 rss: 73Mb L: 49/91 MS: 1 PersAutoDict- DE: "\025\001"- 00:07:28.815 [2024-12-16 12:27:34.333580] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18229723551135235324 len:64765 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.815 [2024-12-16 12:27:34.333608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.815 [2024-12-16 12:27:34.333659] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18229723555195321596 len:64765 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.815 [2024-12-16 12:27:34.333675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.815 [2024-12-16 12:27:34.333729] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18229723555195321596 len:64765 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.815 [2024-12-16 12:27:34.333745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:28.815 #52 NEW cov: 12469 ft: 15334 corp: 15/895b lim: 105 exec/s: 0 rss: 73Mb L: 79/91 MS: 1 ChangeByte- 00:07:28.815 [2024-12-16 12:27:34.373819] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18229723554927803644 len:64765 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.815 [2024-12-16 12:27:34.373849] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.815 [2024-12-16 12:27:34.373893] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18229723555195321596 len:64765 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.815 [2024-12-16 12:27:34.373909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.815 [2024-12-16 12:27:34.373966] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18229723555195321596 len:64765 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.815 [2024-12-16 12:27:34.373982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:28.816 [2024-12-16 12:27:34.374036] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18229723555195321596 len:64765 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.816 [2024-12-16 12:27:34.374052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:29.075 #53 NEW cov: 12469 ft: 15359 corp: 16/986b lim: 105 exec/s: 53 rss: 73Mb L: 91/91 MS: 1 CrossOver- 00:07:29.075 [2024-12-16 12:27:34.433769] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18229723551135235324 len:64765 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.075 [2024-12-16 12:27:34.433797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.075 [2024-12-16 12:27:34.433837] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18229723551940541692 len:64765 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.075 [2024-12-16 12:27:34.433854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:29.075 #54 NEW cov: 12469 ft: 15403 corp: 17/1043b lim: 105 exec/s: 54 rss: 73Mb L: 57/91 MS: 1 InsertByte- 00:07:29.075 [2024-12-16 12:27:34.493797] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:64765 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.075 [2024-12-16 12:27:34.493825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.075 #55 NEW cov: 12469 ft: 15414 corp: 18/1079b lim: 105 exec/s: 55 rss: 73Mb L: 36/91 MS: 1 ShuffleBytes- 00:07:29.075 [2024-12-16 12:27:34.554077] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18229723555229793532 len:64765 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.075 [2024-12-16 12:27:34.554104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.075 [2024-12-16 12:27:34.554150] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18229723555195321596 len:64765 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.075 [2024-12-16 12:27:34.554167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:29.075 #61 NEW cov: 12469 ft: 15434 corp: 19/1121b lim: 105 exec/s: 61 rss: 73Mb L: 
42/91 MS: 1 EraseBytes- 00:07:29.075 [2024-12-16 12:27:34.594198] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18229722743681383676 len:64765 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.075 [2024-12-16 12:27:34.594226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.075 [2024-12-16 12:27:34.594281] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18229723555195321596 len:64765 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.075 [2024-12-16 12:27:34.594298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:29.075 #62 NEW cov: 12469 ft: 15460 corp: 20/1178b lim: 105 exec/s: 62 rss: 73Mb L: 57/91 MS: 1 InsertByte- 00:07:29.075 [2024-12-16 12:27:34.634334] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18229723551135235324 len:64765 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.075 [2024-12-16 12:27:34.634361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.075 [2024-12-16 12:27:34.634401] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18229723551940541692 len:64765 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.075 [2024-12-16 12:27:34.634417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:29.334 #63 NEW cov: 12469 ft: 15486 corp: 21/1235b lim: 105 exec/s: 63 rss: 73Mb L: 57/91 MS: 1 ChangeBit- 00:07:29.334 [2024-12-16 12:27:34.694501] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:64765 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.334 [2024-12-16 12:27:34.694528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.334 [2024-12-16 12:27:34.694571] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18229723555195321596 len:64765 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.334 [2024-12-16 12:27:34.694587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:29.334 #64 NEW cov: 12469 ft: 15524 corp: 22/1284b lim: 105 exec/s: 64 rss: 73Mb L: 49/91 MS: 1 ChangeBinInt- 00:07:29.334 [2024-12-16 12:27:34.754809] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18229723551135235324 len:64765 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.334 [2024-12-16 12:27:34.754837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.334 [2024-12-16 12:27:34.754876] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18229723555195321596 len:64765 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.334 [2024-12-16 12:27:34.754891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:29.334 [2024-12-16 12:27:34.754947] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18229723555195321596 len:64765 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:07:29.334 [2024-12-16 12:27:34.754963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:29.334 #65 NEW cov: 12469 ft: 15618 corp: 23/1364b lim: 105 exec/s: 65 rss: 73Mb L: 80/91 MS: 1 InsertByte- 00:07:29.334 [2024-12-16 12:27:34.814986] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.334 [2024-12-16 12:27:34.815013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.334 [2024-12-16 12:27:34.815060] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.334 [2024-12-16 12:27:34.815077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:29.334 [2024-12-16 12:27:34.815133] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.334 [2024-12-16 12:27:34.815149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:29.334 #66 NEW cov: 12469 ft: 15632 corp: 24/1442b lim: 105 exec/s: 66 rss: 73Mb L: 78/91 MS: 1 InsertRepeatedBytes- 00:07:29.334 [2024-12-16 12:27:34.854844] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:64765 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.334 [2024-12-16 12:27:34.854874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.334 #67 NEW cov: 12469 ft: 15681 corp: 25/1479b lim: 105 exec/s: 67 rss: 73Mb L: 37/91 MS: 1 InsertByte- 00:07:29.594 [2024-12-16 12:27:34.915248] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:64765 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.594 [2024-12-16 12:27:34.915275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.594 [2024-12-16 12:27:34.915315] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:4244373504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.594 [2024-12-16 12:27:34.915331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:29.594 [2024-12-16 12:27:34.915389] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.594 [2024-12-16 12:27:34.915405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:29.594 #68 NEW cov: 12469 ft: 15712 corp: 26/1554b lim: 105 exec/s: 68 rss: 73Mb L: 75/91 MS: 1 InsertRepeatedBytes- 00:07:29.594 [2024-12-16 12:27:34.955343] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:64765 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.594 [2024-12-16 12:27:34.955371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.594 [2024-12-16 12:27:34.955417] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:11068046445893188860 len:39322 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.594 [2024-12-16 12:27:34.955434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:29.594 [2024-12-16 12:27:34.955490] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:11068046444225730969 len:64765 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.594 [2024-12-16 12:27:34.955506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:29.594 #69 NEW cov: 12469 ft: 15740 corp: 27/1629b lim: 105 exec/s: 69 rss: 73Mb L: 75/91 MS: 1 InsertRepeatedBytes- 00:07:29.594 [2024-12-16 12:27:34.995343] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18229723551135235324 len:64765 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.594 [2024-12-16 12:27:34.995370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.594 [2024-12-16 12:27:34.995410] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18229723555195321596 len:64765 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.594 [2024-12-16 12:27:34.995426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:29.594 #70 NEW cov: 12469 ft: 15753 corp: 28/1685b lim: 105 exec/s: 70 rss: 73Mb L: 56/91 MS: 1 PersAutoDict- DE: "\025\001"- 00:07:29.594 [2024-12-16 12:27:35.035701] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:64765 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.594 [2024-12-16 12:27:35.035729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.594 [2024-12-16 12:27:35.035781] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18229723555195321596 len:2813 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.594 [2024-12-16 12:27:35.035797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:29.594 [2024-12-16 12:27:35.035853] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18229723555195321596 len:15101 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.594 [2024-12-16 12:27:35.035868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:29.594 [2024-12-16 12:27:35.035922] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18229723005439507708 len:64765 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.594 [2024-12-16 12:27:35.035937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:29.594 #71 NEW cov: 12469 ft: 15776 corp: 29/1784b lim: 105 exec/s: 71 rss: 73Mb L: 99/99 MS: 1 CrossOver- 00:07:29.594 [2024-12-16 12:27:35.075698] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:0 nsid:0 lba:18229723554927803644 len:64765 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.594 [2024-12-16 12:27:35.075725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.594 [2024-12-16 12:27:35.075771] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18229723555195321596 len:64765 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.594 [2024-12-16 12:27:35.075787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:29.594 [2024-12-16 12:27:35.075841] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18229723555195321596 len:64765 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.594 [2024-12-16 12:27:35.075855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:29.594 #72 NEW cov: 12469 ft: 15792 corp: 30/1851b lim: 105 exec/s: 72 rss: 73Mb L: 67/99 MS: 1 EraseBytes- 00:07:29.594 [2024-12-16 12:27:35.135981] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:64765 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.594 [2024-12-16 12:27:35.136007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.594 [2024-12-16 12:27:35.136057] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744060774120703 len:64765 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.594 [2024-12-16 12:27:35.136073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:29.594 [2024-12-16 12:27:35.136125] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18229723555195321596 len:64765 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.594 [2024-12-16 12:27:35.136141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:29.594 [2024-12-16 12:27:35.136195] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18229723555195321596 len:64765 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.594 [2024-12-16 12:27:35.136211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:29.853 #73 NEW cov: 12469 ft: 15859 corp: 31/1944b lim: 105 exec/s: 73 rss: 74Mb L: 93/99 MS: 1 PersAutoDict- DE: "\025\001"- 00:07:29.853 [2024-12-16 12:27:35.196143] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18229723551135235324 len:64765 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.853 [2024-12-16 12:27:35.196170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.853 [2024-12-16 12:27:35.196222] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18229723555195321596 len:64765 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.853 [2024-12-16 12:27:35.196238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:29.853 [2024-12-16 
12:27:35.196296] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18175398884690164988 len:64765 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.853 [2024-12-16 12:27:35.196312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:29.853 [2024-12-16 12:27:35.196368] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18229723555195321596 len:64765 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.853 [2024-12-16 12:27:35.196384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:29.853 #74 NEW cov: 12469 ft: 15881 corp: 32/2031b lim: 105 exec/s: 74 rss: 74Mb L: 87/99 MS: 1 ChangeByte- 00:07:29.853 [2024-12-16 12:27:35.256072] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18229723551135235324 len:64765 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.853 [2024-12-16 12:27:35.256100] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.853 [2024-12-16 12:27:35.256150] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18229723555182607612 len:64765 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.853 [2024-12-16 12:27:35.256166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:29.853 #75 NEW cov: 12469 ft: 15884 corp: 33/2089b lim: 105 exec/s: 75 rss: 74Mb L: 58/99 MS: 1 InsertByte- 00:07:29.853 [2024-12-16 12:27:35.296378] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:64765 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.853 [2024-12-16 12:27:35.296406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.853 [2024-12-16 12:27:35.296453] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18229723555195321596 len:64765 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.853 [2024-12-16 12:27:35.296468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:29.853 [2024-12-16 12:27:35.296519] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18229723555195321596 len:64765 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.853 [2024-12-16 12:27:35.296534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:29.853 [2024-12-16 12:27:35.296586] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18229723555195321596 len:64765 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.853 [2024-12-16 12:27:35.296601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:29.853 #76 NEW cov: 12469 ft: 15895 corp: 34/2176b lim: 105 exec/s: 76 rss: 74Mb L: 87/99 MS: 1 ChangeBit- 00:07:29.853 [2024-12-16 12:27:35.336365] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18229723554927803644 len:64765 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.853 [2024-12-16 
12:27:35.336391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.853 [2024-12-16 12:27:35.336438] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18229723555195321596 len:64765 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.853 [2024-12-16 12:27:35.336454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:29.853 [2024-12-16 12:27:35.336508] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18229723552309640444 len:64765 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.853 [2024-12-16 12:27:35.336527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:29.853 #77 NEW cov: 12469 ft: 15913 corp: 35/2243b lim: 105 exec/s: 77 rss: 74Mb L: 67/99 MS: 1 ChangeByte- 00:07:29.853 [2024-12-16 12:27:35.396448] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:17870283321355599871 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.853 [2024-12-16 12:27:35.396475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.853 [2024-12-16 12:27:35.396514] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.853 [2024-12-16 12:27:35.396530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:30.112 #78 NEW cov: 12469 ft: 15918 corp: 36/2289b lim: 105 exec/s: 39 rss: 74Mb L: 46/99 MS: 1 ChangeBit- 00:07:30.112 #78 DONE cov: 12469 ft: 15918 corp: 36/2289b lim: 105 exec/s: 39 rss: 74Mb 00:07:30.112 ###### Recommended dictionary. ###### 00:07:30.112 "\025\001" # Uses: 4 00:07:30.112 ###### End of recommended dictionary. 
###### 00:07:30.112 Done 78 runs in 2 second(s) 00:07:30.112 12:27:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_16.conf /var/tmp/suppress_nvmf_fuzz 00:07:30.112 12:27:35 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:30.112 12:27:35 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:30.112 12:27:35 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 17 1 0x1 00:07:30.112 12:27:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=17 00:07:30.112 12:27:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:30.112 12:27:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:30.112 12:27:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:07:30.112 12:27:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_17.conf 00:07:30.112 12:27:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:30.112 12:27:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:30.112 12:27:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 17 00:07:30.112 12:27:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4417 00:07:30.112 12:27:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:07:30.112 12:27:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' 00:07:30.112 12:27:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4417"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:30.112 12:27:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:30.112 12:27:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:30.112 12:27:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' -c /tmp/fuzz_json_17.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 -Z 17 00:07:30.112 [2024-12-16 12:27:35.568230] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 
00:07:30.112 [2024-12-16 12:27:35.568297] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid991462 ] 00:07:30.371 [2024-12-16 12:27:35.758487] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.371 [2024-12-16 12:27:35.793478] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.371 [2024-12-16 12:27:35.852665] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:30.371 [2024-12-16 12:27:35.868988] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4417 *** 00:07:30.371 INFO: Running with entropic power schedule (0xFF, 100). 00:07:30.371 INFO: Seed: 1305629891 00:07:30.371 INFO: Loaded 1 modules (390948 inline 8-bit counters): 390948 [0x2c8afcc, 0x2cea6f0), 00:07:30.371 INFO: Loaded 1 PC tables (390948 PCs): 390948 [0x2cea6f0,0x32e1930), 00:07:30.371 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:07:30.371 INFO: A corpus is not provided, starting from an empty corpus 00:07:30.371 #2 INITED exec/s: 0 rss: 65Mb 00:07:30.371 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:30.371 This may also happen if the target rejected all inputs we tried so far 00:07:30.371 [2024-12-16 12:27:35.924822] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:11574427654092267680 len:41121 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.371 [2024-12-16 12:27:35.924854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:30.371 [2024-12-16 12:27:35.924893] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:11574427654092267680 len:41121 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.371 [2024-12-16 12:27:35.924910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:30.371 [2024-12-16 12:27:35.924963] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:11574427654092267680 len:41121 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.371 [2024-12-16 12:27:35.924980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:30.371 [2024-12-16 12:27:35.925033] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:11574427654092267680 len:41121 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.371 [2024-12-16 12:27:35.925047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:30.371 [2024-12-16 12:27:35.925102] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:0 lba:11574427654092267680 len:41121 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.371 [2024-12-16 12:27:35.925118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:30.905 NEW_FUNC[1/718]: 0x455aa8 in fuzz_nvm_write_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:540 00:07:30.905 NEW_FUNC[2/718]: 0x4783c8 in TestOneInput 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:30.905 #8 NEW cov: 12263 ft: 12261 corp: 2/121b lim: 120 exec/s: 0 rss: 72Mb L: 120/120 MS: 1 InsertRepeatedBytes- 00:07:30.905 [2024-12-16 12:27:36.245645] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:11574427654092267680 len:41121 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.905 [2024-12-16 12:27:36.245679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:30.905 [2024-12-16 12:27:36.245726] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:11574427654092267680 len:41121 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.905 [2024-12-16 12:27:36.245743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:30.905 [2024-12-16 12:27:36.245793] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:11574427654092267680 len:41121 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.905 [2024-12-16 12:27:36.245810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:30.905 [2024-12-16 12:27:36.245864] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:11574427654092267680 len:41121 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.905 [2024-12-16 12:27:36.245879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:30.905 [2024-12-16 12:27:36.245932] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:0 lba:11574427654092267680 len:41121 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.905 [2024-12-16 12:27:36.245949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:30.905 #9 NEW cov: 12376 ft: 12828 corp: 3/241b lim: 120 exec/s: 0 rss: 72Mb L: 120/120 MS: 1 CrossOver- 00:07:30.905 [2024-12-16 12:27:36.305739] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:11574427654092267680 len:41121 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.905 [2024-12-16 12:27:36.305769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:30.905 [2024-12-16 12:27:36.305811] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:11574427654092267680 len:41121 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.905 [2024-12-16 12:27:36.305826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:30.905 [2024-12-16 12:27:36.305878] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:11574427654092267680 len:41121 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.905 [2024-12-16 12:27:36.305894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:30.905 [2024-12-16 12:27:36.305946] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:11574427654092267680 len:41121 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.905 [2024-12-16 
12:27:36.305961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:30.905 [2024-12-16 12:27:36.306015] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:0 lba:11574427654092267680 len:41121 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.905 [2024-12-16 12:27:36.306030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:30.905 #10 NEW cov: 12382 ft: 13033 corp: 4/361b lim: 120 exec/s: 0 rss: 72Mb L: 120/120 MS: 1 CrossOver- 00:07:30.905 [2024-12-16 12:27:36.365874] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:11574427654092267680 len:41121 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.905 [2024-12-16 12:27:36.365903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:30.905 [2024-12-16 12:27:36.365952] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:11574427654092267680 len:41121 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.905 [2024-12-16 12:27:36.365968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:30.905 [2024-12-16 12:27:36.366021] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:11574427654092267680 len:41121 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.905 [2024-12-16 12:27:36.366037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:30.905 [2024-12-16 12:27:36.366090] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:11574427654092267680 len:41121 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.905 [2024-12-16 12:27:36.366106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:30.905 [2024-12-16 12:27:36.366160] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:0 lba:11574427654092267680 len:41121 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.905 [2024-12-16 12:27:36.366178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:30.905 #11 NEW cov: 12467 ft: 13300 corp: 5/481b lim: 120 exec/s: 0 rss: 72Mb L: 120/120 MS: 1 ChangeBinInt- 00:07:30.905 [2024-12-16 12:27:36.405989] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:11574427654092267680 len:41121 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.905 [2024-12-16 12:27:36.406016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:30.905 [2024-12-16 12:27:36.406055] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:11574427654092267680 len:41121 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.905 [2024-12-16 12:27:36.406072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:30.905 [2024-12-16 12:27:36.406124] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:11574427654092267680 len:41121 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:07:30.905 [2024-12-16 12:27:36.406141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:30.905 [2024-12-16 12:27:36.406191] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:11574427654092267680 len:41121 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.905 [2024-12-16 12:27:36.406207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:30.905 [2024-12-16 12:27:36.406260] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:0 lba:11574427654092267680 len:41121 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.905 [2024-12-16 12:27:36.406276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:30.905 #12 NEW cov: 12467 ft: 13420 corp: 6/601b lim: 120 exec/s: 0 rss: 72Mb L: 120/120 MS: 1 ChangeByte- 00:07:31.203 [2024-12-16 12:27:36.465850] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:17216961132006141678 len:61167 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.203 [2024-12-16 12:27:36.465879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.203 [2024-12-16 12:27:36.465915] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:17216961135462248174 len:61167 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.203 [2024-12-16 12:27:36.465932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.203 [2024-12-16 12:27:36.465988] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:17216961135462248174 len:61167 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.203 [2024-12-16 12:27:36.466008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.203 #17 NEW cov: 12467 ft: 13907 corp: 7/690b lim: 120 exec/s: 0 rss: 72Mb L: 89/120 MS: 5 CrossOver-ChangeByte-ChangeBit-ChangeBit-InsertRepeatedBytes- 00:07:31.203 [2024-12-16 12:27:36.505974] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:17216961132006141678 len:61167 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.203 [2024-12-16 12:27:36.506003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.203 [2024-12-16 12:27:36.506041] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:17216961135462248174 len:61167 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.203 [2024-12-16 12:27:36.506056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.203 [2024-12-16 12:27:36.506114] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:11574427654092267680 len:41121 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.203 [2024-12-16 12:27:36.506131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.203 #18 NEW cov: 12467 ft: 13972 corp: 8/781b lim: 120 exec/s: 0 rss: 72Mb L: 91/120 
MS: 1 CrossOver- 00:07:31.203 [2024-12-16 12:27:36.566095] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:11574427654092267680 len:41121 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.203 [2024-12-16 12:27:36.566122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.203 [2024-12-16 12:27:36.566160] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:11574427654092267680 len:41121 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.203 [2024-12-16 12:27:36.566175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.203 [2024-12-16 12:27:36.566227] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:11574427654092267680 len:41121 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.203 [2024-12-16 12:27:36.566243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.203 #24 NEW cov: 12467 ft: 14023 corp: 9/866b lim: 120 exec/s: 0 rss: 72Mb L: 85/120 MS: 1 EraseBytes- 00:07:31.203 [2024-12-16 12:27:36.626267] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:17216961132006141678 len:61167 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.203 [2024-12-16 12:27:36.626294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.203 [2024-12-16 12:27:36.626336] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:17216961135462248174 len:61167 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.203 [2024-12-16 12:27:36.626351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.203 [2024-12-16 12:27:36.626405] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:11574427654092267680 len:41121 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.203 [2024-12-16 12:27:36.626421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.203 #25 NEW cov: 12467 ft: 14064 corp: 10/958b lim: 120 exec/s: 0 rss: 73Mb L: 92/120 MS: 1 InsertByte- 00:07:31.203 [2024-12-16 12:27:36.686449] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:17216961132006141678 len:61167 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.203 [2024-12-16 12:27:36.686477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.203 [2024-12-16 12:27:36.686522] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:17216961135462248174 len:61167 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.203 [2024-12-16 12:27:36.686539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.203 [2024-12-16 12:27:36.686594] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:17216961135462248174 len:61167 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.203 [2024-12-16 12:27:36.686615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR 
FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.203 #26 NEW cov: 12467 ft: 14118 corp: 11/1047b lim: 120 exec/s: 0 rss: 73Mb L: 89/120 MS: 1 ChangeByte- 00:07:31.203 [2024-12-16 12:27:36.726745] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:17216961132006141678 len:61167 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.203 [2024-12-16 12:27:36.726774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.203 [2024-12-16 12:27:36.726812] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:17216961135462248174 len:61167 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.203 [2024-12-16 12:27:36.726828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.203 [2024-12-16 12:27:36.726880] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:11574427654092267680 len:41121 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.203 [2024-12-16 12:27:36.726896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.203 [2024-12-16 12:27:36.726948] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:15914837766678830300 len:41121 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.203 [2024-12-16 12:27:36.726962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:31.203 #27 NEW cov: 12467 ft: 14206 corp: 12/1151b lim: 120 exec/s: 0 rss: 73Mb L: 104/120 MS: 1 InsertRepeatedBytes- 00:07:31.204 [2024-12-16 12:27:36.766869] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:11574427654092267680 len:41121 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.204 [2024-12-16 12:27:36.766896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.204 [2024-12-16 12:27:36.766942] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:11574427654092267680 len:41121 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.204 [2024-12-16 12:27:36.766959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.204 [2024-12-16 12:27:36.767013] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:11574427654092267680 len:41121 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.204 [2024-12-16 12:27:36.767030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.204 [2024-12-16 12:27:36.767084] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:11574427654092267680 len:41121 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.204 [2024-12-16 12:27:36.767100] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:31.463 #28 NEW cov: 12467 ft: 14245 corp: 13/1266b lim: 120 exec/s: 0 rss: 73Mb L: 115/120 MS: 1 EraseBytes- 00:07:31.463 [2024-12-16 12:27:36.806813] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:17216961132006141678 
len:61167 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.463 [2024-12-16 12:27:36.806841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.463 [2024-12-16 12:27:36.806876] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:11574427654092267680 len:41121 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.463 [2024-12-16 12:27:36.806893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.463 [2024-12-16 12:27:36.806947] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:11596468799190114464 len:61167 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.463 [2024-12-16 12:27:36.806962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.463 NEW_FUNC[1/1]: 0x1c586b8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:652 00:07:31.463 #29 NEW cov: 12490 ft: 14282 corp: 14/1355b lim: 120 exec/s: 0 rss: 73Mb L: 89/120 MS: 1 CrossOver- 00:07:31.463 [2024-12-16 12:27:36.846986] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:11574427654092267680 len:41121 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.463 [2024-12-16 12:27:36.847013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.463 [2024-12-16 12:27:36.847058] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:11574427654092791968 len:41121 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.463 [2024-12-16 12:27:36.847074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.463 [2024-12-16 12:27:36.847129] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:11574427654092267680 len:41121 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.463 [2024-12-16 12:27:36.847146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.463 #30 NEW cov: 12490 ft: 14307 corp: 15/1440b lim: 120 exec/s: 0 rss: 73Mb L: 85/120 MS: 1 ChangeBit- 00:07:31.463 [2024-12-16 12:27:36.907086] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:17216961132006141678 len:61167 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.463 [2024-12-16 12:27:36.907115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.463 [2024-12-16 12:27:36.907154] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:17216961135462248174 len:61167 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.463 [2024-12-16 12:27:36.907169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.463 [2024-12-16 12:27:36.907223] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:11574427654092267680 len:41121 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.463 [2024-12-16 12:27:36.907238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 
sqhd:0004 p:0 m:0 dnr:1 00:07:31.463 #31 NEW cov: 12490 ft: 14328 corp: 16/1532b lim: 120 exec/s: 31 rss: 73Mb L: 92/120 MS: 1 ShuffleBytes- 00:07:31.463 [2024-12-16 12:27:36.967287] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:17216961132006141678 len:61167 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.463 [2024-12-16 12:27:36.967315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.463 [2024-12-16 12:27:36.967353] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:17216961135462248174 len:61167 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.463 [2024-12-16 12:27:36.967369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.463 [2024-12-16 12:27:36.967425] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:17216961135462248174 len:61167 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.463 [2024-12-16 12:27:36.967442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.463 #32 NEW cov: 12490 ft: 14338 corp: 17/1621b lim: 120 exec/s: 32 rss: 73Mb L: 89/120 MS: 1 ShuffleBytes- 00:07:31.463 [2024-12-16 12:27:37.007666] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:11574427654092267680 len:41121 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.463 [2024-12-16 12:27:37.007695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.463 [2024-12-16 12:27:37.007747] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:11574427654092267680 len:41121 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.463 [2024-12-16 12:27:37.007764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.463 [2024-12-16 12:27:37.007821] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:11574427654092267680 len:41121 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.463 [2024-12-16 12:27:37.007838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.463 [2024-12-16 12:27:37.007889] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:11574427654092267680 len:41121 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.463 [2024-12-16 12:27:37.007905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:31.463 [2024-12-16 12:27:37.007958] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:0 lba:11574427654092267680 len:41121 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.463 [2024-12-16 12:27:37.007972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:31.722 #33 NEW cov: 12490 ft: 14378 corp: 18/1741b lim: 120 exec/s: 33 rss: 73Mb L: 120/120 MS: 1 ShuffleBytes- 00:07:31.722 [2024-12-16 12:27:37.047501] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:11574427654092267680 len:41121 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:07:31.722 [2024-12-16 12:27:37.047530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.722 [2024-12-16 12:27:37.047567] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:11574427654092267680 len:41121 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.722 [2024-12-16 12:27:37.047583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.722 [2024-12-16 12:27:37.047640] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:11574427654092267680 len:41121 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.722 [2024-12-16 12:27:37.047655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.722 #34 NEW cov: 12490 ft: 14440 corp: 19/1826b lim: 120 exec/s: 34 rss: 73Mb L: 85/120 MS: 1 CMP- DE: "\377\004_\272\2616?\316"- 00:07:31.722 [2024-12-16 12:27:37.087569] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:17216961132006141678 len:61167 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.722 [2024-12-16 12:27:37.087596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.722 [2024-12-16 12:27:37.087647] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:17216961135462248174 len:61167 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.722 [2024-12-16 12:27:37.087663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.722 [2024-12-16 12:27:37.087715] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:11574427654092272800 len:41121 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.722 [2024-12-16 12:27:37.087733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.722 #35 NEW cov: 12490 ft: 14464 corp: 20/1918b lim: 120 exec/s: 35 rss: 73Mb L: 92/120 MS: 1 ChangeByte- 00:07:31.722 [2024-12-16 12:27:37.127719] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:17216961132006141678 len:61167 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.722 [2024-12-16 12:27:37.127747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.722 [2024-12-16 12:27:37.127784] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:11574427654092267680 len:41121 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.722 [2024-12-16 12:27:37.127804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.722 [2024-12-16 12:27:37.127856] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:11596468799190114464 len:61167 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.722 [2024-12-16 12:27:37.127873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.722 #36 NEW cov: 12490 ft: 14491 corp: 21/2007b lim: 120 exec/s: 36 rss: 73Mb L: 89/120 MS: 1 ChangeBinInt- 
00:07:31.722 [2024-12-16 12:27:37.187900] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:17216961132006141678 len:41121 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.722 [2024-12-16 12:27:37.187928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.722 [2024-12-16 12:27:37.187971] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:11574427654092267680 len:41121 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.722 [2024-12-16 12:27:37.187987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.723 [2024-12-16 12:27:37.188042] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:17149707385035484910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.723 [2024-12-16 12:27:37.188059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.723 #37 NEW cov: 12490 ft: 14524 corp: 22/2086b lim: 120 exec/s: 37 rss: 73Mb L: 79/120 MS: 1 EraseBytes- 00:07:31.723 [2024-12-16 12:27:37.248073] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:17216961132006141678 len:61167 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.723 [2024-12-16 12:27:37.248101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.723 [2024-12-16 12:27:37.248138] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:17216961135462248174 len:61167 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.723 [2024-12-16 12:27:37.248155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.723 [2024-12-16 12:27:37.248209] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:11574427654092272800 len:41121 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.723 [2024-12-16 12:27:37.248226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.982 #38 NEW cov: 12490 ft: 14547 corp: 23/2178b lim: 120 exec/s: 38 rss: 73Mb L: 92/120 MS: 1 ChangeBinInt- 00:07:31.982 [2024-12-16 12:27:37.308184] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:17216961132006141678 len:61167 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.982 [2024-12-16 12:27:37.308212] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.982 [2024-12-16 12:27:37.308248] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:17216961135462248174 len:61167 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.982 [2024-12-16 12:27:37.308264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.982 [2024-12-16 12:27:37.308317] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:17216961135462248174 len:61167 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.982 [2024-12-16 12:27:37.308332] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) 
qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.982 #39 NEW cov: 12490 ft: 14571 corp: 24/2267b lim: 120 exec/s: 39 rss: 73Mb L: 89/120 MS: 1 ChangeBit- 00:07:31.982 [2024-12-16 12:27:37.368699] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:11574427654092267680 len:41121 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.982 [2024-12-16 12:27:37.368731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.982 [2024-12-16 12:27:37.368778] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:11574427654092267680 len:41121 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.982 [2024-12-16 12:27:37.368794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.982 [2024-12-16 12:27:37.368848] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:11574427654092267680 len:41121 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.982 [2024-12-16 12:27:37.368864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.982 [2024-12-16 12:27:37.368916] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:11574427654092267680 len:41121 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.982 [2024-12-16 12:27:37.368933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:31.982 [2024-12-16 12:27:37.368986] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:0 lba:11574427654092267680 len:41121 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.982 [2024-12-16 12:27:37.369003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:31.982 #40 NEW cov: 12490 ft: 14608 corp: 25/2387b lim: 120 exec/s: 40 rss: 73Mb L: 120/120 MS: 1 ShuffleBytes- 00:07:31.982 [2024-12-16 12:27:37.428582] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:17216961132006141678 len:61167 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.982 [2024-12-16 12:27:37.428614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.982 [2024-12-16 12:27:37.428662] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:17216961135462248174 len:61167 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.982 [2024-12-16 12:27:37.428679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.982 [2024-12-16 12:27:37.428732] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:17216961135462248174 len:61167 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.982 [2024-12-16 12:27:37.428746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.982 #46 NEW cov: 12490 ft: 14624 corp: 26/2476b lim: 120 exec/s: 46 rss: 74Mb L: 89/120 MS: 1 CopyPart- 00:07:31.982 [2024-12-16 12:27:37.488869] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:17216961132006141678 len:61167 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:07:31.982 [2024-12-16 12:27:37.488897] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.982 [2024-12-16 12:27:37.488945] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:17216961135462248174 len:61167 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.982 [2024-12-16 12:27:37.488963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.982 [2024-12-16 12:27:37.489016] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:17216961135462248174 len:61167 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.982 [2024-12-16 12:27:37.489030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.982 [2024-12-16 12:27:37.489083] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:11574427654092267700 len:41121 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.982 [2024-12-16 12:27:37.489103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:31.982 #47 NEW cov: 12490 ft: 14659 corp: 27/2593b lim: 120 exec/s: 47 rss: 74Mb L: 117/120 MS: 1 CrossOver- 00:07:32.241 [2024-12-16 12:27:37.548933] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:11574427654092267680 len:41121 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.241 [2024-12-16 12:27:37.548962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:32.241 [2024-12-16 12:27:37.549005] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:11574427654092791968 len:41121 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.241 [2024-12-16 12:27:37.549022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:32.241 [2024-12-16 12:27:37.549077] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:11574427654092267680 len:41121 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.241 [2024-12-16 12:27:37.549092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:32.241 #48 NEW cov: 12490 ft: 14664 corp: 28/2678b lim: 120 exec/s: 48 rss: 74Mb L: 85/120 MS: 1 ChangeBinInt- 00:07:32.241 [2024-12-16 12:27:37.609064] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:11574427654092267680 len:41121 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.241 [2024-12-16 12:27:37.609092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:32.241 [2024-12-16 12:27:37.609129] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:11574427654092267680 len:41121 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.241 [2024-12-16 12:27:37.609145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:32.241 [2024-12-16 12:27:37.609200] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 
lba:11574427654092267680 len:41121 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.241 [2024-12-16 12:27:37.609215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:32.241 #49 NEW cov: 12490 ft: 14679 corp: 29/2764b lim: 120 exec/s: 49 rss: 74Mb L: 86/120 MS: 1 InsertByte- 00:07:32.241 [2024-12-16 12:27:37.649525] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:11574427654092267680 len:41121 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.241 [2024-12-16 12:27:37.649552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:32.241 [2024-12-16 12:27:37.649600] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:11574427654092267680 len:41121 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.241 [2024-12-16 12:27:37.649620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:32.241 [2024-12-16 12:27:37.649674] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:11574427654092267680 len:41121 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.241 [2024-12-16 12:27:37.649691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:32.241 [2024-12-16 12:27:37.649744] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:11574427654092267680 len:41121 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.241 [2024-12-16 12:27:37.649759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:32.241 [2024-12-16 12:27:37.649812] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:0 lba:11574427654092267680 len:41121 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.241 [2024-12-16 12:27:37.649832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:32.241 #50 NEW cov: 12490 ft: 14685 corp: 30/2884b lim: 120 exec/s: 50 rss: 74Mb L: 120/120 MS: 1 ShuffleBytes- 00:07:32.241 [2024-12-16 12:27:37.689342] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:17216961132006141598 len:61167 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.241 [2024-12-16 12:27:37.689368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:32.241 [2024-12-16 12:27:37.689405] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:11574427654092267680 len:41121 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.241 [2024-12-16 12:27:37.689421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:32.241 [2024-12-16 12:27:37.689474] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:11574513751006683296 len:61167 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.241 [2024-12-16 12:27:37.689491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:32.241 #51 NEW cov: 12490 ft: 14693 corp: 31/2974b lim: 120 exec/s: 51 
rss: 74Mb L: 90/120 MS: 1 InsertByte- 00:07:32.241 [2024-12-16 12:27:37.729440] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:17216961132006141678 len:61167 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.241 [2024-12-16 12:27:37.729469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:32.241 [2024-12-16 12:27:37.729506] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:17216961135462248174 len:61167 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.241 [2024-12-16 12:27:37.729521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:32.241 [2024-12-16 12:27:37.729575] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:11574427654092267680 len:41121 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.241 [2024-12-16 12:27:37.729591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:32.241 #52 NEW cov: 12490 ft: 14702 corp: 32/3066b lim: 120 exec/s: 52 rss: 74Mb L: 92/120 MS: 1 CrossOver- 00:07:32.241 [2024-12-16 12:27:37.789753] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:17216961132006141678 len:61167 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.241 [2024-12-16 12:27:37.789780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:32.241 [2024-12-16 12:27:37.789827] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:17216961135462248017 len:61167 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.241 [2024-12-16 12:27:37.789842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:32.241 [2024-12-16 12:27:37.789897] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:11574427654092267680 len:41121 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.241 [2024-12-16 12:27:37.789912] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:32.241 [2024-12-16 12:27:37.789965] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:15914837766678830300 len:41121 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.241 [2024-12-16 12:27:37.789980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:32.501 #53 NEW cov: 12490 ft: 14712 corp: 33/3170b lim: 120 exec/s: 53 rss: 74Mb L: 104/120 MS: 1 ChangeByte- 00:07:32.501 [2024-12-16 12:27:37.849932] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:11574427654092267680 len:41121 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.501 [2024-12-16 12:27:37.849960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:32.501 [2024-12-16 12:27:37.850006] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:11574427654092267680 len:41121 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.501 [2024-12-16 12:27:37.850022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:32.501 [2024-12-16 12:27:37.850074] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:11575272079022399648 len:41121 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.501 [2024-12-16 12:27:37.850091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:32.501 [2024-12-16 12:27:37.850146] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:11574427654092267680 len:41121 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.501 [2024-12-16 12:27:37.850162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:32.501 #54 NEW cov: 12490 ft: 14716 corp: 34/3288b lim: 120 exec/s: 54 rss: 74Mb L: 118/120 MS: 1 CrossOver- 00:07:32.501 [2024-12-16 12:27:37.910235] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:11574427654092267680 len:41121 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.501 [2024-12-16 12:27:37.910263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:32.501 [2024-12-16 12:27:37.910320] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:11574427654092267680 len:41121 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.501 [2024-12-16 12:27:37.910334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:32.501 [2024-12-16 12:27:37.910387] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:11574427654092267680 len:41121 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.501 [2024-12-16 12:27:37.910405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:32.501 [2024-12-16 12:27:37.910456] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:11574427654092267680 len:41121 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.501 [2024-12-16 12:27:37.910472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:32.501 [2024-12-16 12:27:37.910527] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:0 lba:11574427654092267680 len:41121 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.501 [2024-12-16 12:27:37.910544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:32.501 #55 NEW cov: 12490 ft: 14719 corp: 35/3408b lim: 120 exec/s: 27 rss: 74Mb L: 120/120 MS: 1 ShuffleBytes- 00:07:32.501 #55 DONE cov: 12490 ft: 14719 corp: 35/3408b lim: 120 exec/s: 27 rss: 74Mb 00:07:32.501 ###### Recommended dictionary. ###### 00:07:32.501 "\377\004_\272\2616?\316" # Uses: 0 00:07:32.501 ###### End of recommended dictionary. 
###### 00:07:32.501 Done 55 runs in 2 second(s) 00:07:32.501 12:27:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_17.conf /var/tmp/suppress_nvmf_fuzz 00:07:32.501 12:27:38 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:32.501 12:27:38 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:32.501 12:27:38 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 18 1 0x1 00:07:32.501 12:27:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=18 00:07:32.501 12:27:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:32.501 12:27:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:32.501 12:27:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:07:32.501 12:27:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_18.conf 00:07:32.501 12:27:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:32.501 12:27:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:32.501 12:27:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 18 00:07:32.501 12:27:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4418 00:07:32.501 12:27:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:07:32.501 12:27:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' 00:07:32.501 12:27:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4418"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:32.501 12:27:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:32.501 12:27:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:32.501 12:27:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' -c /tmp/fuzz_json_18.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 -Z 18 00:07:32.761 [2024-12-16 12:27:38.084232] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 
00:07:32.761 [2024-12-16 12:27:38.084298] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid991965 ] 00:07:32.761 [2024-12-16 12:27:38.267857] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.761 [2024-12-16 12:27:38.300807] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.021 [2024-12-16 12:27:38.360163] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:33.021 [2024-12-16 12:27:38.376464] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4418 *** 00:07:33.021 INFO: Running with entropic power schedule (0xFF, 100). 00:07:33.021 INFO: Seed: 3811663975 00:07:33.021 INFO: Loaded 1 modules (390948 inline 8-bit counters): 390948 [0x2c8afcc, 0x2cea6f0), 00:07:33.021 INFO: Loaded 1 PC tables (390948 PCs): 390948 [0x2cea6f0,0x32e1930), 00:07:33.021 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:07:33.021 INFO: A corpus is not provided, starting from an empty corpus 00:07:33.021 #2 INITED exec/s: 0 rss: 65Mb 00:07:33.021 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:33.021 This may also happen if the target rejected all inputs we tried so far 00:07:33.021 [2024-12-16 12:27:38.441982] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:33.021 [2024-12-16 12:27:38.442010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.021 [2024-12-16 12:27:38.442053] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:33.021 [2024-12-16 12:27:38.442068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:33.021 [2024-12-16 12:27:38.442118] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:33.021 [2024-12-16 12:27:38.442132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:33.021 [2024-12-16 12:27:38.442186] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:33.021 [2024-12-16 12:27:38.442201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:33.280 NEW_FUNC[1/716]: 0x459398 in fuzz_nvm_write_zeroes_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:562 00:07:33.281 NEW_FUNC[2/716]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:33.281 #11 NEW cov: 12188 ft: 12189 corp: 2/88b lim: 100 exec/s: 0 rss: 72Mb L: 87/87 MS: 4 CopyPart-EraseBytes-ChangeBit-InsertRepeatedBytes- 00:07:33.281 [2024-12-16 12:27:38.783015] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:33.281 [2024-12-16 12:27:38.783068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.281 [2024-12-16 12:27:38.783141] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:33.281 [2024-12-16 12:27:38.783166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:33.281 [2024-12-16 12:27:38.783234] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:33.281 [2024-12-16 12:27:38.783257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:33.281 [2024-12-16 12:27:38.783326] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:33.281 [2024-12-16 12:27:38.783350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:33.281 #12 NEW cov: 12318 ft: 12890 corp: 3/175b lim: 100 exec/s: 0 rss: 72Mb L: 87/87 MS: 1 ShuffleBytes- 00:07:33.540 [2024-12-16 12:27:38.852918] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:33.540 [2024-12-16 12:27:38.852946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.540 [2024-12-16 12:27:38.852980] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:33.540 [2024-12-16 12:27:38.852995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:33.540 [2024-12-16 12:27:38.853046] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:33.540 [2024-12-16 12:27:38.853060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:33.540 #33 NEW cov: 12324 ft: 13291 corp: 4/238b lim: 100 exec/s: 0 rss: 72Mb L: 63/87 MS: 1 EraseBytes- 00:07:33.540 [2024-12-16 12:27:38.912998] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:33.540 [2024-12-16 12:27:38.913027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.540 [2024-12-16 12:27:38.913061] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:33.540 [2024-12-16 12:27:38.913077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:33.540 [2024-12-16 12:27:38.913126] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:33.540 [2024-12-16 12:27:38.913141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:33.540 #34 NEW cov: 12409 ft: 13678 corp: 5/301b lim: 100 exec/s: 0 rss: 72Mb L: 63/87 MS: 1 CopyPart- 00:07:33.540 [2024-12-16 12:27:38.973294] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:33.540 [2024-12-16 12:27:38.973325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.540 [2024-12-16 12:27:38.973360] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 
cid:1 nsid:0 00:07:33.540 [2024-12-16 12:27:38.973374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:33.540 [2024-12-16 12:27:38.973423] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:33.540 [2024-12-16 12:27:38.973438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:33.540 [2024-12-16 12:27:38.973488] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:33.540 [2024-12-16 12:27:38.973502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:33.540 #35 NEW cov: 12409 ft: 13846 corp: 6/388b lim: 100 exec/s: 0 rss: 72Mb L: 87/87 MS: 1 CMP- DE: "\377\007"- 00:07:33.540 [2024-12-16 12:27:39.013378] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:33.540 [2024-12-16 12:27:39.013404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.540 [2024-12-16 12:27:39.013449] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:33.541 [2024-12-16 12:27:39.013463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:33.541 [2024-12-16 12:27:39.013514] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:33.541 [2024-12-16 12:27:39.013528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:33.541 [2024-12-16 12:27:39.013576] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:33.541 [2024-12-16 12:27:39.013590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:33.541 #41 NEW cov: 12409 ft: 13936 corp: 7/476b lim: 100 exec/s: 0 rss: 72Mb L: 88/88 MS: 1 InsertByte- 00:07:33.541 [2024-12-16 12:27:39.053507] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:33.541 [2024-12-16 12:27:39.053533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.541 [2024-12-16 12:27:39.053578] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:33.541 [2024-12-16 12:27:39.053592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:33.541 [2024-12-16 12:27:39.053644] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:33.541 [2024-12-16 12:27:39.053658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:33.541 [2024-12-16 12:27:39.053708] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:33.541 [2024-12-16 12:27:39.053722] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 
dnr:1 00:07:33.541 #42 NEW cov: 12409 ft: 14045 corp: 8/563b lim: 100 exec/s: 0 rss: 72Mb L: 87/88 MS: 1 ChangeByte- 00:07:33.541 [2024-12-16 12:27:39.093527] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:33.541 [2024-12-16 12:27:39.093553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.541 [2024-12-16 12:27:39.093591] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:33.541 [2024-12-16 12:27:39.093605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:33.541 [2024-12-16 12:27:39.093662] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:33.541 [2024-12-16 12:27:39.093676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:33.800 #43 NEW cov: 12409 ft: 14075 corp: 9/626b lim: 100 exec/s: 0 rss: 72Mb L: 63/88 MS: 1 ChangeByte- 00:07:33.800 [2024-12-16 12:27:39.133814] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:33.800 [2024-12-16 12:27:39.133841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.800 [2024-12-16 12:27:39.133887] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:33.800 [2024-12-16 12:27:39.133902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:33.800 [2024-12-16 12:27:39.133952] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:33.800 [2024-12-16 12:27:39.133966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:33.800 [2024-12-16 12:27:39.134016] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:33.800 [2024-12-16 12:27:39.134030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:33.800 #49 NEW cov: 12409 ft: 14100 corp: 10/712b lim: 100 exec/s: 0 rss: 72Mb L: 86/88 MS: 1 EraseBytes- 00:07:33.800 [2024-12-16 12:27:39.193941] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:33.800 [2024-12-16 12:27:39.193968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.800 [2024-12-16 12:27:39.194012] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:33.800 [2024-12-16 12:27:39.194026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:33.800 [2024-12-16 12:27:39.194076] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:33.800 [2024-12-16 12:27:39.194091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:33.800 [2024-12-16 12:27:39.194143] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:33.800 [2024-12-16 12:27:39.194157] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:33.800 #50 NEW cov: 12409 ft: 14133 corp: 11/801b lim: 100 exec/s: 0 rss: 73Mb L: 89/89 MS: 1 PersAutoDict- DE: "\377\007"- 00:07:33.800 [2024-12-16 12:27:39.254027] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:33.800 [2024-12-16 12:27:39.254053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.800 [2024-12-16 12:27:39.254090] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:33.800 [2024-12-16 12:27:39.254105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:33.800 [2024-12-16 12:27:39.254153] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:33.800 [2024-12-16 12:27:39.254169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:33.800 #51 NEW cov: 12409 ft: 14152 corp: 12/873b lim: 100 exec/s: 0 rss: 73Mb L: 72/89 MS: 1 EraseBytes- 00:07:33.800 [2024-12-16 12:27:39.294189] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:33.800 [2024-12-16 12:27:39.294218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.800 [2024-12-16 12:27:39.294253] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:33.800 [2024-12-16 12:27:39.294267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:33.800 [2024-12-16 12:27:39.294317] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:33.800 [2024-12-16 12:27:39.294333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:33.801 [2024-12-16 12:27:39.294384] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:33.801 [2024-12-16 12:27:39.294397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:33.801 NEW_FUNC[1/1]: 0x1c586b8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:652 00:07:33.801 #52 NEW cov: 12432 ft: 14224 corp: 13/960b lim: 100 exec/s: 0 rss: 73Mb L: 87/89 MS: 1 ShuffleBytes- 00:07:33.801 [2024-12-16 12:27:39.334061] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:33.801 [2024-12-16 12:27:39.334086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.801 [2024-12-16 12:27:39.334128] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:33.801 [2024-12-16 12:27:39.334142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 
p:0 m:0 dnr:1 00:07:34.060 #58 NEW cov: 12432 ft: 14547 corp: 14/1016b lim: 100 exec/s: 0 rss: 73Mb L: 56/89 MS: 1 EraseBytes- 00:07:34.060 [2024-12-16 12:27:39.394251] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:34.060 [2024-12-16 12:27:39.394277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.060 [2024-12-16 12:27:39.394318] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:34.060 [2024-12-16 12:27:39.394334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.060 #59 NEW cov: 12432 ft: 14584 corp: 15/1071b lim: 100 exec/s: 59 rss: 73Mb L: 55/89 MS: 1 EraseBytes- 00:07:34.060 [2024-12-16 12:27:39.454664] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:34.060 [2024-12-16 12:27:39.454690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.060 [2024-12-16 12:27:39.454735] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:34.060 [2024-12-16 12:27:39.454750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.060 [2024-12-16 12:27:39.454801] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:34.060 [2024-12-16 12:27:39.454816] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:34.060 [2024-12-16 12:27:39.454867] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:34.060 [2024-12-16 12:27:39.454882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:34.060 #60 NEW cov: 12432 ft: 14609 corp: 16/1160b lim: 100 exec/s: 60 rss: 73Mb L: 89/89 MS: 1 ChangeBit- 00:07:34.060 [2024-12-16 12:27:39.514831] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:34.060 [2024-12-16 12:27:39.514857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.060 [2024-12-16 12:27:39.514904] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:34.060 [2024-12-16 12:27:39.514918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.060 [2024-12-16 12:27:39.514972] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:34.060 [2024-12-16 12:27:39.514986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:34.060 [2024-12-16 12:27:39.515038] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:34.060 [2024-12-16 12:27:39.515052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:34.060 #61 NEW cov: 12432 ft: 14617 corp: 17/1249b 
lim: 100 exec/s: 61 rss: 73Mb L: 89/89 MS: 1 CMP- DE: "\377\377\377\377\377\377\377\377"- 00:07:34.060 [2024-12-16 12:27:39.554924] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:34.060 [2024-12-16 12:27:39.554951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.060 [2024-12-16 12:27:39.554994] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:34.060 [2024-12-16 12:27:39.555009] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.060 [2024-12-16 12:27:39.555060] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:34.060 [2024-12-16 12:27:39.555076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:34.060 [2024-12-16 12:27:39.555129] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:34.060 [2024-12-16 12:27:39.555143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:34.060 #62 NEW cov: 12432 ft: 14626 corp: 18/1336b lim: 100 exec/s: 62 rss: 73Mb L: 87/89 MS: 1 ChangeBinInt- 00:07:34.060 [2024-12-16 12:27:39.594890] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:34.060 [2024-12-16 12:27:39.594917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.060 [2024-12-16 12:27:39.594954] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:34.060 [2024-12-16 12:27:39.594968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.060 [2024-12-16 12:27:39.595020] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:34.060 [2024-12-16 12:27:39.595034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:34.320 #68 NEW cov: 12432 ft: 14681 corp: 19/1405b lim: 100 exec/s: 68 rss: 73Mb L: 69/89 MS: 1 InsertRepeatedBytes- 00:07:34.320 [2024-12-16 12:27:39.655101] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:34.320 [2024-12-16 12:27:39.655126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.320 [2024-12-16 12:27:39.655161] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:34.320 [2024-12-16 12:27:39.655175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.320 [2024-12-16 12:27:39.655226] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:34.320 [2024-12-16 12:27:39.655240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:34.320 #69 NEW cov: 12432 ft: 14734 corp: 20/1468b lim: 100 exec/s: 69 rss: 
73Mb L: 63/89 MS: 1 CopyPart- 00:07:34.320 [2024-12-16 12:27:39.695328] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:34.320 [2024-12-16 12:27:39.695354] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.320 [2024-12-16 12:27:39.695399] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:34.320 [2024-12-16 12:27:39.695413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.320 [2024-12-16 12:27:39.695464] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:34.320 [2024-12-16 12:27:39.695479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:34.320 [2024-12-16 12:27:39.695530] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:34.320 [2024-12-16 12:27:39.695543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:34.320 #70 NEW cov: 12432 ft: 14754 corp: 21/1555b lim: 100 exec/s: 70 rss: 73Mb L: 87/89 MS: 1 ChangeBinInt- 00:07:34.320 [2024-12-16 12:27:39.735306] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:34.320 [2024-12-16 12:27:39.735331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.320 [2024-12-16 12:27:39.735366] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:34.320 [2024-12-16 12:27:39.735380] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.320 [2024-12-16 12:27:39.735430] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:34.320 [2024-12-16 12:27:39.735444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:34.320 #71 NEW cov: 12432 ft: 14799 corp: 22/1626b lim: 100 exec/s: 71 rss: 73Mb L: 71/89 MS: 1 PersAutoDict- DE: "\377\007"- 00:07:34.320 [2024-12-16 12:27:39.795394] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:34.320 [2024-12-16 12:27:39.795419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.320 [2024-12-16 12:27:39.795455] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:34.320 [2024-12-16 12:27:39.795469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.320 #72 NEW cov: 12432 ft: 14806 corp: 23/1682b lim: 100 exec/s: 72 rss: 73Mb L: 56/89 MS: 1 ShuffleBytes- 00:07:34.320 [2024-12-16 12:27:39.855764] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:34.320 [2024-12-16 12:27:39.855791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.320 
[2024-12-16 12:27:39.855839] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:34.320 [2024-12-16 12:27:39.855855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.320 [2024-12-16 12:27:39.855908] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:34.320 [2024-12-16 12:27:39.855923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:34.320 [2024-12-16 12:27:39.855974] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:34.320 [2024-12-16 12:27:39.855992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:34.320 #73 NEW cov: 12432 ft: 14820 corp: 24/1770b lim: 100 exec/s: 73 rss: 73Mb L: 88/89 MS: 1 ChangeBinInt- 00:07:34.580 [2024-12-16 12:27:39.895618] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:34.580 [2024-12-16 12:27:39.895646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.580 [2024-12-16 12:27:39.895682] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:34.580 [2024-12-16 12:27:39.895694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.580 #74 NEW cov: 12432 ft: 14896 corp: 25/1820b lim: 100 exec/s: 74 rss: 73Mb L: 50/89 MS: 1 EraseBytes- 00:07:34.580 [2024-12-16 12:27:39.956067] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:34.580 [2024-12-16 12:27:39.956093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.580 [2024-12-16 12:27:39.956138] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:34.580 [2024-12-16 12:27:39.956152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.580 [2024-12-16 12:27:39.956203] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:34.580 [2024-12-16 12:27:39.956217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:34.580 [2024-12-16 12:27:39.956267] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:34.580 [2024-12-16 12:27:39.956282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:34.580 #75 NEW cov: 12432 ft: 14905 corp: 26/1905b lim: 100 exec/s: 75 rss: 74Mb L: 85/89 MS: 1 CrossOver- 00:07:34.580 [2024-12-16 12:27:40.016252] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:34.580 [2024-12-16 12:27:40.016279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.580 [2024-12-16 12:27:40.016325] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:34.580 [2024-12-16 12:27:40.016340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.580 [2024-12-16 12:27:40.016390] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:34.580 [2024-12-16 12:27:40.016405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:34.580 [2024-12-16 12:27:40.016458] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:34.580 [2024-12-16 12:27:40.016472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:34.580 #76 NEW cov: 12432 ft: 14962 corp: 27/1996b lim: 100 exec/s: 76 rss: 74Mb L: 91/91 MS: 1 CopyPart- 00:07:34.580 [2024-12-16 12:27:40.056321] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:34.580 [2024-12-16 12:27:40.056350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.580 [2024-12-16 12:27:40.056393] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:34.580 [2024-12-16 12:27:40.056408] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.580 [2024-12-16 12:27:40.056459] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:34.580 [2024-12-16 12:27:40.056477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:34.580 [2024-12-16 12:27:40.056528] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:34.580 [2024-12-16 12:27:40.056542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:34.580 #77 NEW cov: 12432 ft: 15030 corp: 28/2083b lim: 100 exec/s: 77 rss: 74Mb L: 87/91 MS: 1 ChangeASCIIInt- 00:07:34.580 [2024-12-16 12:27:40.096432] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:34.580 [2024-12-16 12:27:40.096460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.580 [2024-12-16 12:27:40.096497] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:34.580 [2024-12-16 12:27:40.096513] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.580 [2024-12-16 12:27:40.096563] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:34.580 [2024-12-16 12:27:40.096579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:34.580 [2024-12-16 12:27:40.096631] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:34.580 [2024-12-16 12:27:40.096646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:34.580 #78 NEW cov: 12432 ft: 15043 corp: 29/2178b lim: 100 exec/s: 78 rss: 74Mb L: 95/95 MS: 1 CopyPart- 00:07:34.580 [2024-12-16 12:27:40.136592] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:34.580 [2024-12-16 12:27:40.136625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.580 [2024-12-16 12:27:40.136673] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:34.580 [2024-12-16 12:27:40.136687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.580 [2024-12-16 12:27:40.136739] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:34.581 [2024-12-16 12:27:40.136754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:34.581 [2024-12-16 12:27:40.136806] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:34.581 [2024-12-16 12:27:40.136820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:34.840 #79 NEW cov: 12432 ft: 15080 corp: 30/2263b lim: 100 exec/s: 79 rss: 74Mb L: 85/95 MS: 1 ChangeBit- 00:07:34.840 [2024-12-16 12:27:40.196535] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:34.840 [2024-12-16 12:27:40.196563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.840 [2024-12-16 12:27:40.196617] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:34.840 [2024-12-16 12:27:40.196632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.840 #85 NEW cov: 12432 ft: 15104 corp: 31/2313b lim: 100 exec/s: 85 rss: 74Mb L: 50/95 MS: 1 CopyPart- 00:07:34.840 [2024-12-16 12:27:40.256876] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:34.840 [2024-12-16 12:27:40.256904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.840 [2024-12-16 12:27:40.256941] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:34.840 [2024-12-16 12:27:40.256954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.840 [2024-12-16 12:27:40.257006] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:34.840 [2024-12-16 12:27:40.257021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:34.840 #86 NEW cov: 12432 ft: 15130 corp: 32/2384b lim: 100 exec/s: 86 rss: 74Mb L: 71/95 MS: 1 ShuffleBytes- 00:07:34.840 [2024-12-16 12:27:40.316905] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:34.840 [2024-12-16 12:27:40.316932] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.840 [2024-12-16 12:27:40.316967] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:34.840 [2024-12-16 12:27:40.316982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.840 #87 NEW cov: 12432 ft: 15138 corp: 33/2425b lim: 100 exec/s: 87 rss: 74Mb L: 41/95 MS: 1 EraseBytes- 00:07:34.840 [2024-12-16 12:27:40.356935] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:34.840 [2024-12-16 12:27:40.356963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.840 [2024-12-16 12:27:40.357004] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:34.840 [2024-12-16 12:27:40.357019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.840 #93 NEW cov: 12432 ft: 15184 corp: 34/2480b lim: 100 exec/s: 93 rss: 74Mb L: 55/95 MS: 1 ChangeByte- 00:07:35.101 [2024-12-16 12:27:40.417273] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:35.101 [2024-12-16 12:27:40.417300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:35.101 [2024-12-16 12:27:40.417337] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:35.101 [2024-12-16 12:27:40.417352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:35.101 [2024-12-16 12:27:40.417404] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:35.101 [2024-12-16 12:27:40.417419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:35.101 #96 NEW cov: 12432 ft: 15197 corp: 35/2556b lim: 100 exec/s: 48 rss: 74Mb L: 76/95 MS: 3 CrossOver-ShuffleBytes-InsertRepeatedBytes- 00:07:35.101 #96 DONE cov: 12432 ft: 15197 corp: 35/2556b lim: 100 exec/s: 48 rss: 74Mb 00:07:35.101 ###### Recommended dictionary. ###### 00:07:35.101 "\377\007" # Uses: 3 00:07:35.101 "\377\377\377\377\377\377\377\377" # Uses: 0 00:07:35.101 ###### End of recommended dictionary. 
###### 00:07:35.101 Done 96 runs in 2 second(s) 00:07:35.101 12:27:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_18.conf /var/tmp/suppress_nvmf_fuzz 00:07:35.101 12:27:40 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:35.101 12:27:40 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:35.101 12:27:40 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 19 1 0x1 00:07:35.101 12:27:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=19 00:07:35.101 12:27:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:35.101 12:27:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:35.101 12:27:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:07:35.101 12:27:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_19.conf 00:07:35.101 12:27:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:35.101 12:27:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:35.101 12:27:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 19 00:07:35.101 12:27:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4419 00:07:35.101 12:27:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:07:35.101 12:27:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' 00:07:35.101 12:27:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4419"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:35.101 12:27:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:35.101 12:27:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:35.101 12:27:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' -c /tmp/fuzz_json_19.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 -Z 19 00:07:35.101 [2024-12-16 12:27:40.591375] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 
00:07:35.101 [2024-12-16 12:27:40.591443] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid992471 ] 00:07:35.360 [2024-12-16 12:27:40.776776] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.360 [2024-12-16 12:27:40.809896] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.360 [2024-12-16 12:27:40.869131] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:35.360 [2024-12-16 12:27:40.885443] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4419 *** 00:07:35.360 INFO: Running with entropic power schedule (0xFF, 100). 00:07:35.360 INFO: Seed: 2024674171 00:07:35.619 INFO: Loaded 1 modules (390948 inline 8-bit counters): 390948 [0x2c8afcc, 0x2cea6f0), 00:07:35.619 INFO: Loaded 1 PC tables (390948 PCs): 390948 [0x2cea6f0,0x32e1930), 00:07:35.619 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:07:35.619 INFO: A corpus is not provided, starting from an empty corpus 00:07:35.619 #2 INITED exec/s: 0 rss: 66Mb 00:07:35.619 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:35.619 This may also happen if the target rejected all inputs we tried so far 00:07:35.619 [2024-12-16 12:27:40.954984] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:1 00:07:35.619 [2024-12-16 12:27:40.955020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:35.619 [2024-12-16 12:27:40.955117] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:07:35.619 [2024-12-16 12:27:40.955140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:35.619 [2024-12-16 12:27:40.955253] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:07:35.619 [2024-12-16 12:27:40.955274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:35.878 NEW_FUNC[1/716]: 0x45c358 in fuzz_nvm_write_uncorrectable_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:582 00:07:35.878 NEW_FUNC[2/716]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:35.878 #4 NEW cov: 12184 ft: 12185 corp: 2/37b lim: 50 exec/s: 0 rss: 72Mb L: 36/36 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:07:35.878 [2024-12-16 12:27:41.275156] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:1 00:07:35.878 [2024-12-16 12:27:41.275192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:35.878 [2024-12-16 12:27:41.275243] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:07:35.878 [2024-12-16 12:27:41.275260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 
p:0 m:0 dnr:1 00:07:35.878 [2024-12-16 12:27:41.275314] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:07:35.878 [2024-12-16 12:27:41.275328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:35.878 #5 NEW cov: 12297 ft: 12960 corp: 3/73b lim: 50 exec/s: 0 rss: 72Mb L: 36/36 MS: 1 CopyPart- 00:07:35.878 [2024-12-16 12:27:41.335218] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:1 00:07:35.878 [2024-12-16 12:27:41.335248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:35.878 [2024-12-16 12:27:41.335291] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:07:35.878 [2024-12-16 12:27:41.335307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:35.878 #6 NEW cov: 12303 ft: 13406 corp: 4/100b lim: 50 exec/s: 0 rss: 72Mb L: 27/36 MS: 1 EraseBytes- 00:07:35.878 [2024-12-16 12:27:41.395301] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:1 00:07:35.878 [2024-12-16 12:27:41.395330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:35.878 [2024-12-16 12:27:41.395381] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:12801 00:07:35.878 [2024-12-16 12:27:41.395398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:35.878 #7 NEW cov: 12388 ft: 13627 corp: 5/128b lim: 50 exec/s: 0 rss: 72Mb L: 28/36 MS: 1 InsertByte- 00:07:36.137 [2024-12-16 12:27:41.455483] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:1 00:07:36.138 [2024-12-16 12:27:41.455511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.138 [2024-12-16 12:27:41.455554] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:69524319247532032 len:12801 00:07:36.138 [2024-12-16 12:27:41.455570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:36.138 #8 NEW cov: 12388 ft: 13741 corp: 6/156b lim: 50 exec/s: 0 rss: 72Mb L: 28/36 MS: 1 ChangeBinInt- 00:07:36.138 [2024-12-16 12:27:41.515680] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:45079976738816 len:1 00:07:36.138 [2024-12-16 12:27:41.515708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.138 [2024-12-16 12:27:41.515750] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:69524319247532032 len:12801 00:07:36.138 [2024-12-16 12:27:41.515766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:36.138 #9 NEW cov: 12388 ft: 13797 corp: 7/184b lim: 50 exec/s: 0 rss: 72Mb L: 28/36 MS: 1 
ChangeByte- 00:07:36.138 [2024-12-16 12:27:41.575807] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:1 00:07:36.138 [2024-12-16 12:27:41.575834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.138 [2024-12-16 12:27:41.575873] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:65 00:07:36.138 [2024-12-16 12:27:41.575889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:36.138 #10 NEW cov: 12388 ft: 13919 corp: 8/211b lim: 50 exec/s: 0 rss: 72Mb L: 27/36 MS: 1 ChangeBit- 00:07:36.138 [2024-12-16 12:27:41.616060] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:1 00:07:36.138 [2024-12-16 12:27:41.616089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.138 [2024-12-16 12:27:41.616126] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:07:36.138 [2024-12-16 12:27:41.616142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:36.138 [2024-12-16 12:27:41.616199] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:69524319247532032 len:12801 00:07:36.138 [2024-12-16 12:27:41.616215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:36.138 #11 NEW cov: 12388 ft: 13989 corp: 9/249b lim: 50 exec/s: 0 rss: 72Mb L: 38/38 MS: 1 CopyPart- 00:07:36.138 [2024-12-16 12:27:41.656045] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:45079976738816 len:1 00:07:36.138 [2024-12-16 12:27:41.656074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.138 [2024-12-16 12:27:41.656117] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:69524319255920640 len:12801 00:07:36.138 [2024-12-16 12:27:41.656134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:36.138 #12 NEW cov: 12388 ft: 14015 corp: 10/277b lim: 50 exec/s: 0 rss: 72Mb L: 28/38 MS: 1 ChangeBit- 00:07:36.397 [2024-12-16 12:27:41.716223] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:1 00:07:36.397 [2024-12-16 12:27:41.716250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.397 [2024-12-16 12:27:41.716293] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:07:36.398 [2024-12-16 12:27:41.716308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:36.398 #13 NEW cov: 12388 ft: 14046 corp: 11/301b lim: 50 exec/s: 0 rss: 72Mb L: 24/38 MS: 1 EraseBytes- 00:07:36.398 [2024-12-16 12:27:41.756349] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: 
WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:45079976738816 len:1 00:07:36.398 [2024-12-16 12:27:41.756378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.398 [2024-12-16 12:27:41.756434] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:17795410977601093632 len:12801 00:07:36.398 [2024-12-16 12:27:41.756450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:36.398 #14 NEW cov: 12388 ft: 14076 corp: 12/329b lim: 50 exec/s: 0 rss: 72Mb L: 28/38 MS: 1 ChangeBinInt- 00:07:36.398 [2024-12-16 12:27:41.796443] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:4398046511104 len:1 00:07:36.398 [2024-12-16 12:27:41.796471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.398 [2024-12-16 12:27:41.796525] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:69524319247532032 len:12801 00:07:36.398 [2024-12-16 12:27:41.796541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:36.398 NEW_FUNC[1/1]: 0x1c586b8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:652 00:07:36.398 #15 NEW cov: 12411 ft: 14128 corp: 13/357b lim: 50 exec/s: 0 rss: 73Mb L: 28/38 MS: 1 ChangeBit- 00:07:36.398 [2024-12-16 12:27:41.836558] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:45079976738816 len:1 00:07:36.398 [2024-12-16 12:27:41.836585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.398 [2024-12-16 12:27:41.836640] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:69524319255920640 len:63233 00:07:36.398 [2024-12-16 12:27:41.836656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:36.398 #16 NEW cov: 12411 ft: 14143 corp: 14/385b lim: 50 exec/s: 0 rss: 73Mb L: 28/38 MS: 1 CopyPart- 00:07:36.398 [2024-12-16 12:27:41.896818] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:45079976738816 len:1 00:07:36.398 [2024-12-16 12:27:41.896846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.398 [2024-12-16 12:27:41.896883] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:69681059784032256 len:36495 00:07:36.398 [2024-12-16 12:27:41.896900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:36.398 [2024-12-16 12:27:41.896957] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:12800 len:1 00:07:36.398 [2024-12-16 12:27:41.896975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:36.398 #17 NEW cov: 12411 ft: 14174 corp: 15/417b lim: 50 exec/s: 0 rss: 73Mb L: 32/38 MS: 1 
InsertRepeatedBytes- 00:07:36.398 [2024-12-16 12:27:41.936940] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:1 00:07:36.398 [2024-12-16 12:27:41.936968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.398 [2024-12-16 12:27:41.937004] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:07:36.398 [2024-12-16 12:27:41.937021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:36.398 [2024-12-16 12:27:41.937077] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:07:36.398 [2024-12-16 12:27:41.937092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:36.398 #18 NEW cov: 12411 ft: 14185 corp: 16/453b lim: 50 exec/s: 18 rss: 73Mb L: 36/38 MS: 1 ShuffleBytes- 00:07:36.657 [2024-12-16 12:27:41.977085] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:1 00:07:36.657 [2024-12-16 12:27:41.977114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.657 [2024-12-16 12:27:41.977149] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:07:36.657 [2024-12-16 12:27:41.977163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:36.657 [2024-12-16 12:27:41.977220] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:07:36.657 [2024-12-16 12:27:41.977236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:36.657 #19 NEW cov: 12411 ft: 14224 corp: 17/488b lim: 50 exec/s: 19 rss: 73Mb L: 35/38 MS: 1 EraseBytes- 00:07:36.657 [2024-12-16 12:27:42.017316] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:45079976738816 len:1 00:07:36.657 [2024-12-16 12:27:42.017344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.657 [2024-12-16 12:27:42.017383] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:69525414464192512 len:65536 00:07:36.657 [2024-12-16 12:27:42.017398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:36.657 [2024-12-16 12:27:42.017451] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:07:36.657 [2024-12-16 12:27:42.017467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:36.657 [2024-12-16 12:27:42.017522] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446462817776173055 len:1 00:07:36.657 [2024-12-16 12:27:42.017537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT 
(00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:36.657 #20 NEW cov: 12411 ft: 14513 corp: 18/535b lim: 50 exec/s: 20 rss: 73Mb L: 47/47 MS: 1 InsertRepeatedBytes- 00:07:36.657 [2024-12-16 12:27:42.057190] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:45079976738816 len:234 00:07:36.658 [2024-12-16 12:27:42.057218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.658 [2024-12-16 12:27:42.057253] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:271579372093440 len:51 00:07:36.658 [2024-12-16 12:27:42.057269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:36.658 #21 NEW cov: 12411 ft: 14526 corp: 19/564b lim: 50 exec/s: 21 rss: 73Mb L: 29/47 MS: 1 InsertByte- 00:07:36.658 [2024-12-16 12:27:42.097532] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:45079976738816 len:1 00:07:36.658 [2024-12-16 12:27:42.097560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.658 [2024-12-16 12:27:42.097604] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:129 00:07:36.658 [2024-12-16 12:27:42.097625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:36.658 [2024-12-16 12:27:42.097681] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:54975581389047 len:1 00:07:36.658 [2024-12-16 12:27:42.097697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:36.658 [2024-12-16 12:27:42.097751] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:17798225942116564992 len:1 00:07:36.658 [2024-12-16 12:27:42.097768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:36.658 #22 NEW cov: 12411 ft: 14544 corp: 20/611b lim: 50 exec/s: 22 rss: 73Mb L: 47/47 MS: 1 CrossOver- 00:07:36.658 [2024-12-16 12:27:42.137679] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:45079976738816 len:1 00:07:36.658 [2024-12-16 12:27:42.137709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.658 [2024-12-16 12:27:42.137750] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:69525414464192512 len:65536 00:07:36.658 [2024-12-16 12:27:42.137767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:36.658 [2024-12-16 12:27:42.137823] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:07:36.658 [2024-12-16 12:27:42.137839] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:36.658 [2024-12-16 12:27:42.137897] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446462817776173055 len:1 00:07:36.658 [2024-12-16 12:27:42.137913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:36.658 #23 NEW cov: 12411 ft: 14571 corp: 21/659b lim: 50 exec/s: 23 rss: 73Mb L: 48/48 MS: 1 InsertByte- 00:07:36.658 [2024-12-16 12:27:42.197747] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:45079976738816 len:1 00:07:36.658 [2024-12-16 12:27:42.197776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.658 [2024-12-16 12:27:42.197814] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:69681059784032256 len:36495 00:07:36.658 [2024-12-16 12:27:42.197830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:36.658 [2024-12-16 12:27:42.197885] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:78336 len:1 00:07:36.658 [2024-12-16 12:27:42.197902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:36.917 #24 NEW cov: 12411 ft: 14658 corp: 22/691b lim: 50 exec/s: 24 rss: 73Mb L: 32/48 MS: 1 ChangeBit- 00:07:36.917 [2024-12-16 12:27:42.257765] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:42 00:07:36.917 [2024-12-16 12:27:42.257793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.917 [2024-12-16 12:27:42.257832] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:07:36.917 [2024-12-16 12:27:42.257848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:36.917 #25 NEW cov: 12411 ft: 14672 corp: 23/717b lim: 50 exec/s: 25 rss: 73Mb L: 26/48 MS: 1 CrossOver- 00:07:36.917 [2024-12-16 12:27:42.318048] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:2893606912849346560 len:10281 00:07:36.917 [2024-12-16 12:27:42.318075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.917 [2024-12-16 12:27:42.318110] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:11540474045136896 len:1 00:07:36.917 [2024-12-16 12:27:42.318126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:36.917 [2024-12-16 12:27:42.318181] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:07:36.917 [2024-12-16 12:27:42.318197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:36.917 #26 NEW cov: 12411 ft: 14692 corp: 24/749b lim: 50 exec/s: 26 rss: 73Mb L: 32/48 MS: 1 InsertRepeatedBytes- 00:07:36.917 [2024-12-16 12:27:42.377971] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE 
UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:1 00:07:36.917 [2024-12-16 12:27:42.378001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.917 #27 NEW cov: 12411 ft: 14980 corp: 25/766b lim: 50 exec/s: 27 rss: 73Mb L: 17/48 MS: 1 EraseBytes- 00:07:36.917 [2024-12-16 12:27:42.418182] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:45079976738816 len:234 00:07:36.917 [2024-12-16 12:27:42.418210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.917 [2024-12-16 12:27:42.418253] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:271579372093440 len:51 00:07:36.917 [2024-12-16 12:27:42.418270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:36.917 #28 NEW cov: 12411 ft: 15063 corp: 26/795b lim: 50 exec/s: 28 rss: 73Mb L: 29/48 MS: 1 CrossOver- 00:07:36.917 [2024-12-16 12:27:42.478616] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:45079976738816 len:1 00:07:36.917 [2024-12-16 12:27:42.478644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.917 [2024-12-16 12:27:42.478697] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:35184372088832 len:129 00:07:36.917 [2024-12-16 12:27:42.478713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:36.917 [2024-12-16 12:27:42.478768] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:54975581389047 len:1 00:07:36.917 [2024-12-16 12:27:42.478784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:36.917 [2024-12-16 12:27:42.478839] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:17798225942116564992 len:1 00:07:36.917 [2024-12-16 12:27:42.478855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:37.177 #29 NEW cov: 12411 ft: 15097 corp: 27/842b lim: 50 exec/s: 29 rss: 73Mb L: 47/48 MS: 1 ChangeBit- 00:07:37.177 [2024-12-16 12:27:42.538667] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:134217728 len:1 00:07:37.177 [2024-12-16 12:27:42.538694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:37.177 [2024-12-16 12:27:42.538732] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:07:37.177 [2024-12-16 12:27:42.538748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:37.177 [2024-12-16 12:27:42.538803] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:07:37.177 [2024-12-16 12:27:42.538820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR 
FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:37.177 #30 NEW cov: 12411 ft: 15122 corp: 28/878b lim: 50 exec/s: 30 rss: 73Mb L: 36/48 MS: 1 ChangeBit- 00:07:37.177 [2024-12-16 12:27:42.578672] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:1 00:07:37.177 [2024-12-16 12:27:42.578699] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:37.177 [2024-12-16 12:27:42.578740] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:69524319247532032 len:12801 00:07:37.177 [2024-12-16 12:27:42.578756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:37.177 #31 NEW cov: 12411 ft: 15132 corp: 29/906b lim: 50 exec/s: 31 rss: 73Mb L: 28/48 MS: 1 ShuffleBytes- 00:07:37.177 [2024-12-16 12:27:42.618661] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:1 00:07:37.177 [2024-12-16 12:27:42.618688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:37.177 #32 NEW cov: 12411 ft: 15184 corp: 30/917b lim: 50 exec/s: 32 rss: 73Mb L: 11/48 MS: 1 CrossOver- 00:07:37.177 [2024-12-16 12:27:42.678934] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:1 00:07:37.177 [2024-12-16 12:27:42.678962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:37.177 [2024-12-16 12:27:42.678998] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:69524319247532032 len:12801 00:07:37.177 [2024-12-16 12:27:42.679014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:37.177 #33 NEW cov: 12411 ft: 15188 corp: 31/945b lim: 50 exec/s: 33 rss: 73Mb L: 28/48 MS: 1 ShuffleBytes- 00:07:37.177 [2024-12-16 12:27:42.719108] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:1 00:07:37.177 [2024-12-16 12:27:42.719136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:37.177 [2024-12-16 12:27:42.719171] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:69524322871410688 len:12801 00:07:37.177 [2024-12-16 12:27:42.719187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:37.437 #34 NEW cov: 12411 ft: 15271 corp: 32/973b lim: 50 exec/s: 34 rss: 74Mb L: 28/48 MS: 1 ChangeByte- 00:07:37.437 [2024-12-16 12:27:42.779139] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1125899906842624 len:1 00:07:37.437 [2024-12-16 12:27:42.779168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:37.437 #35 NEW cov: 12411 ft: 15297 corp: 33/984b lim: 50 exec/s: 35 rss: 74Mb L: 11/48 MS: 1 ChangeBit- 00:07:37.437 [2024-12-16 12:27:42.839624] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE 
sqid:1 cid:0 nsid:0 lba:45079976738816 len:1 00:07:37.437 [2024-12-16 12:27:42.839651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:37.437 [2024-12-16 12:27:42.839701] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:271579374747648 len:65536 00:07:37.437 [2024-12-16 12:27:42.839718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:37.437 [2024-12-16 12:27:42.839772] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:07:37.437 [2024-12-16 12:27:42.839789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:37.437 [2024-12-16 12:27:42.839842] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446742978492891135 len:12801 00:07:37.437 [2024-12-16 12:27:42.839858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:37.437 #36 NEW cov: 12411 ft: 15303 corp: 34/1032b lim: 50 exec/s: 36 rss: 74Mb L: 48/48 MS: 1 InsertByte- 00:07:37.437 [2024-12-16 12:27:42.879660] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:1 00:07:37.437 [2024-12-16 12:27:42.879688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:37.437 [2024-12-16 12:27:42.879724] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:07:37.437 [2024-12-16 12:27:42.879742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:37.437 [2024-12-16 12:27:42.879797] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:271579372060672 len:51 00:07:37.437 [2024-12-16 12:27:42.879814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:37.437 #37 NEW cov: 12411 ft: 15315 corp: 35/1071b lim: 50 exec/s: 37 rss: 74Mb L: 39/48 MS: 1 InsertRepeatedBytes- 00:07:37.437 [2024-12-16 12:27:42.919652] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:45079976738816 len:1 00:07:37.437 [2024-12-16 12:27:42.919679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:37.437 [2024-12-16 12:27:42.919719] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:69524319255920640 len:63233 00:07:37.437 [2024-12-16 12:27:42.919737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:37.437 #38 NEW cov: 12411 ft: 15349 corp: 36/1099b lim: 50 exec/s: 19 rss: 74Mb L: 28/48 MS: 1 ShuffleBytes- 00:07:37.437 #38 DONE cov: 12411 ft: 15349 corp: 36/1099b lim: 50 exec/s: 19 rss: 74Mb 00:07:37.437 Done 38 runs in 2 second(s) 00:07:37.696 12:27:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_19.conf 
/var/tmp/suppress_nvmf_fuzz 00:07:37.696 12:27:43 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:37.696 12:27:43 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:37.696 12:27:43 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 20 1 0x1 00:07:37.697 12:27:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=20 00:07:37.697 12:27:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:37.697 12:27:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:37.697 12:27:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:07:37.697 12:27:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_20.conf 00:07:37.697 12:27:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:37.697 12:27:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:37.697 12:27:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 20 00:07:37.697 12:27:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4420 00:07:37.697 12:27:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:07:37.697 12:27:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' 00:07:37.697 12:27:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4420"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:37.697 12:27:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:37.697 12:27:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:37.697 12:27:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' -c /tmp/fuzz_json_20.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 -Z 20 00:07:37.697 [2024-12-16 12:27:43.096270] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:07:37.697 [2024-12-16 12:27:43.096330] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid992784 ] 00:07:37.982 [2024-12-16 12:27:43.281636] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.982 [2024-12-16 12:27:43.313666] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.982 [2024-12-16 12:27:43.373176] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:37.982 [2024-12-16 12:27:43.389485] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:07:37.982 INFO: Running with entropic power schedule (0xFF, 100). 
00:07:37.982 INFO: Seed: 234699655 00:07:37.982 INFO: Loaded 1 modules (390948 inline 8-bit counters): 390948 [0x2c8afcc, 0x2cea6f0), 00:07:37.982 INFO: Loaded 1 PC tables (390948 PCs): 390948 [0x2cea6f0,0x32e1930), 00:07:37.982 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:07:37.982 INFO: A corpus is not provided, starting from an empty corpus 00:07:37.982 #2 INITED exec/s: 0 rss: 66Mb 00:07:37.982 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:37.982 This may also happen if the target rejected all inputs we tried so far 00:07:37.982 [2024-12-16 12:27:43.435180] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:37.982 [2024-12-16 12:27:43.435210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:37.982 [2024-12-16 12:27:43.435253] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:37.982 [2024-12-16 12:27:43.435270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:37.982 [2024-12-16 12:27:43.435326] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:37.982 [2024-12-16 12:27:43.435343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:37.982 [2024-12-16 12:27:43.435401] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:37.982 [2024-12-16 12:27:43.435418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:38.242 NEW_FUNC[1/718]: 0x45df18 in fuzz_nvm_reservation_acquire_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:597 00:07:38.242 NEW_FUNC[2/718]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:38.242 #7 NEW cov: 12234 ft: 12232 corp: 2/74b lim: 90 exec/s: 0 rss: 73Mb L: 73/73 MS: 5 InsertByte-CopyPart-ShuffleBytes-EraseBytes-InsertRepeatedBytes- 00:07:38.242 [2024-12-16 12:27:43.745861] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:38.242 [2024-12-16 12:27:43.745893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:38.242 [2024-12-16 12:27:43.745930] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:38.242 [2024-12-16 12:27:43.745946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:38.242 [2024-12-16 12:27:43.745998] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:38.242 [2024-12-16 12:27:43.746012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:38.242 [2024-12-16 12:27:43.746064] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:38.242 [2024-12-16 
12:27:43.746081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:38.242 #18 NEW cov: 12355 ft: 12780 corp: 3/153b lim: 90 exec/s: 0 rss: 73Mb L: 79/79 MS: 1 InsertRepeatedBytes- 00:07:38.242 [2024-12-16 12:27:43.785878] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:38.242 [2024-12-16 12:27:43.785906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:38.242 [2024-12-16 12:27:43.785948] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:38.242 [2024-12-16 12:27:43.785964] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:38.242 [2024-12-16 12:27:43.786016] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:38.242 [2024-12-16 12:27:43.786031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:38.242 [2024-12-16 12:27:43.786085] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:38.242 [2024-12-16 12:27:43.786100] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:38.501 #19 NEW cov: 12361 ft: 13096 corp: 4/226b lim: 90 exec/s: 0 rss: 73Mb L: 73/79 MS: 1 ChangeByte- 00:07:38.501 [2024-12-16 12:27:43.846049] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:38.501 [2024-12-16 12:27:43.846076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:38.501 [2024-12-16 12:27:43.846117] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:38.501 [2024-12-16 12:27:43.846133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:38.501 [2024-12-16 12:27:43.846184] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:38.501 [2024-12-16 12:27:43.846200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:38.501 [2024-12-16 12:27:43.846253] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:38.501 [2024-12-16 12:27:43.846268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:38.501 #20 NEW cov: 12446 ft: 13389 corp: 5/299b lim: 90 exec/s: 0 rss: 73Mb L: 73/79 MS: 1 ShuffleBytes- 00:07:38.501 [2024-12-16 12:27:43.886126] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:38.501 [2024-12-16 12:27:43.886154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:38.501 [2024-12-16 12:27:43.886192] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:38.501 [2024-12-16 
12:27:43.886207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:38.501 [2024-12-16 12:27:43.886259] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:38.501 [2024-12-16 12:27:43.886274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:38.501 [2024-12-16 12:27:43.886326] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:38.501 [2024-12-16 12:27:43.886341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:38.502 #21 NEW cov: 12446 ft: 13466 corp: 6/379b lim: 90 exec/s: 0 rss: 73Mb L: 80/80 MS: 1 InsertRepeatedBytes- 00:07:38.502 [2024-12-16 12:27:43.946323] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:38.502 [2024-12-16 12:27:43.946351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:38.502 [2024-12-16 12:27:43.946392] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:38.502 [2024-12-16 12:27:43.946407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:38.502 [2024-12-16 12:27:43.946462] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:38.502 [2024-12-16 12:27:43.946479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:38.502 [2024-12-16 12:27:43.946533] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:38.502 [2024-12-16 12:27:43.946548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:38.502 #22 NEW cov: 12446 ft: 13529 corp: 7/459b lim: 90 exec/s: 0 rss: 73Mb L: 80/80 MS: 1 CrossOver- 00:07:38.502 [2024-12-16 12:27:44.006467] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:38.502 [2024-12-16 12:27:44.006495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:38.502 [2024-12-16 12:27:44.006539] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:38.502 [2024-12-16 12:27:44.006555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:38.502 [2024-12-16 12:27:44.006607] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:38.502 [2024-12-16 12:27:44.006625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:38.502 [2024-12-16 12:27:44.006677] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:38.502 [2024-12-16 12:27:44.006693] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 
cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:38.502 #23 NEW cov: 12446 ft: 13598 corp: 8/544b lim: 90 exec/s: 0 rss: 73Mb L: 85/85 MS: 1 InsertRepeatedBytes- 00:07:38.761 [2024-12-16 12:27:44.066657] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:38.761 [2024-12-16 12:27:44.066684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:38.761 [2024-12-16 12:27:44.066730] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:38.762 [2024-12-16 12:27:44.066745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:38.762 [2024-12-16 12:27:44.066796] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:38.762 [2024-12-16 12:27:44.066812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:38.762 [2024-12-16 12:27:44.066863] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:38.762 [2024-12-16 12:27:44.066877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:38.762 #24 NEW cov: 12446 ft: 13618 corp: 9/625b lim: 90 exec/s: 0 rss: 73Mb L: 81/85 MS: 1 InsertByte- 00:07:38.762 [2024-12-16 12:27:44.106758] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:38.762 [2024-12-16 12:27:44.106785] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:38.762 [2024-12-16 12:27:44.106832] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:38.762 [2024-12-16 12:27:44.106849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:38.762 [2024-12-16 12:27:44.106905] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:38.762 [2024-12-16 12:27:44.106921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:38.762 [2024-12-16 12:27:44.106972] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:38.762 [2024-12-16 12:27:44.106988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:38.762 #25 NEW cov: 12446 ft: 13630 corp: 10/707b lim: 90 exec/s: 0 rss: 73Mb L: 82/85 MS: 1 EraseBytes- 00:07:38.762 [2024-12-16 12:27:44.166940] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:38.762 [2024-12-16 12:27:44.166967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:38.762 [2024-12-16 12:27:44.167008] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:38.762 [2024-12-16 12:27:44.167023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 
cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:38.762 [2024-12-16 12:27:44.167075] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:38.762 [2024-12-16 12:27:44.167092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:38.762 [2024-12-16 12:27:44.167146] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:38.762 [2024-12-16 12:27:44.167161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:38.762 #26 NEW cov: 12446 ft: 13670 corp: 11/787b lim: 90 exec/s: 0 rss: 73Mb L: 80/85 MS: 1 CrossOver- 00:07:38.762 [2024-12-16 12:27:44.206627] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:38.762 [2024-12-16 12:27:44.206654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:38.762 #27 NEW cov: 12446 ft: 14500 corp: 12/820b lim: 90 exec/s: 0 rss: 74Mb L: 33/85 MS: 1 InsertRepeatedBytes- 00:07:38.762 [2024-12-16 12:27:44.246702] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:38.762 [2024-12-16 12:27:44.246728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:38.762 #28 NEW cov: 12446 ft: 14646 corp: 13/854b lim: 90 exec/s: 0 rss: 74Mb L: 34/85 MS: 1 InsertByte- 00:07:38.762 [2024-12-16 12:27:44.307267] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:38.762 [2024-12-16 12:27:44.307294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:38.762 [2024-12-16 12:27:44.307338] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:38.762 [2024-12-16 12:27:44.307354] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:38.762 [2024-12-16 12:27:44.307408] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:38.762 [2024-12-16 12:27:44.307424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:38.762 [2024-12-16 12:27:44.307476] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:38.762 [2024-12-16 12:27:44.307490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:39.022 NEW_FUNC[1/1]: 0x1c586b8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:652 00:07:39.022 #29 NEW cov: 12469 ft: 14687 corp: 14/940b lim: 90 exec/s: 0 rss: 74Mb L: 86/86 MS: 1 CrossOver- 00:07:39.022 [2024-12-16 12:27:44.347243] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:39.022 [2024-12-16 12:27:44.347269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.022 [2024-12-16 12:27:44.347308] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:39.022 [2024-12-16 12:27:44.347324] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:39.022 [2024-12-16 12:27:44.347377] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:39.022 [2024-12-16 12:27:44.347393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:39.022 #30 NEW cov: 12469 ft: 14984 corp: 15/1003b lim: 90 exec/s: 0 rss: 74Mb L: 63/86 MS: 1 EraseBytes- 00:07:39.022 [2024-12-16 12:27:44.387505] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:39.022 [2024-12-16 12:27:44.387532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.022 [2024-12-16 12:27:44.387576] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:39.022 [2024-12-16 12:27:44.387591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:39.022 [2024-12-16 12:27:44.387661] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:39.022 [2024-12-16 12:27:44.387676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:39.022 [2024-12-16 12:27:44.387729] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:39.022 [2024-12-16 12:27:44.387745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:39.022 #31 NEW cov: 12469 ft: 15014 corp: 16/1088b lim: 90 exec/s: 0 rss: 74Mb L: 85/86 MS: 1 CrossOver- 00:07:39.022 [2024-12-16 12:27:44.427589] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:39.022 [2024-12-16 12:27:44.427622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.022 [2024-12-16 12:27:44.427673] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:39.022 [2024-12-16 12:27:44.427690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:39.022 [2024-12-16 12:27:44.427742] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:39.022 [2024-12-16 12:27:44.427758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:39.022 [2024-12-16 12:27:44.427811] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:39.022 [2024-12-16 12:27:44.427829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:39.022 #32 NEW cov: 12469 ft: 15033 corp: 17/1168b lim: 90 exec/s: 32 rss: 74Mb L: 80/86 MS: 1 ChangeBinInt- 00:07:39.022 [2024-12-16 12:27:44.467440] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:39.022 [2024-12-16 12:27:44.467467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.022 [2024-12-16 12:27:44.467512] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:39.022 [2024-12-16 12:27:44.467532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:39.022 #36 NEW cov: 12469 ft: 15369 corp: 18/1207b lim: 90 exec/s: 36 rss: 74Mb L: 39/86 MS: 4 InsertByte-EraseBytes-ChangeBinInt-InsertRepeatedBytes- 00:07:39.022 [2024-12-16 12:27:44.507840] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:39.022 [2024-12-16 12:27:44.507867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.022 [2024-12-16 12:27:44.507912] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:39.022 [2024-12-16 12:27:44.507929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:39.022 [2024-12-16 12:27:44.507981] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:39.022 [2024-12-16 12:27:44.507997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:39.022 [2024-12-16 12:27:44.508051] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:39.022 [2024-12-16 12:27:44.508067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:39.022 #37 NEW cov: 12469 ft: 15463 corp: 19/1296b lim: 90 exec/s: 37 rss: 74Mb L: 89/89 MS: 1 InsertRepeatedBytes- 00:07:39.022 [2024-12-16 12:27:44.568058] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:39.022 [2024-12-16 12:27:44.568085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.022 [2024-12-16 12:27:44.568129] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:39.022 [2024-12-16 12:27:44.568145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:39.022 [2024-12-16 12:27:44.568196] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:39.022 [2024-12-16 12:27:44.568212] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:39.022 [2024-12-16 12:27:44.568266] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:39.022 [2024-12-16 12:27:44.568282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:39.282 #38 NEW cov: 12469 ft: 15500 corp: 20/1381b lim: 90 exec/s: 38 rss: 74Mb L: 85/89 MS: 1 
ChangeBit- 00:07:39.282 [2024-12-16 12:27:44.628212] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:39.282 [2024-12-16 12:27:44.628238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.282 [2024-12-16 12:27:44.628281] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:39.282 [2024-12-16 12:27:44.628296] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:39.282 [2024-12-16 12:27:44.628350] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:39.282 [2024-12-16 12:27:44.628366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:39.282 [2024-12-16 12:27:44.628420] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:39.282 [2024-12-16 12:27:44.628435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:39.282 #39 NEW cov: 12469 ft: 15511 corp: 21/1461b lim: 90 exec/s: 39 rss: 74Mb L: 80/89 MS: 1 EraseBytes- 00:07:39.282 [2024-12-16 12:27:44.688357] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:39.282 [2024-12-16 12:27:44.688384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.282 [2024-12-16 12:27:44.688428] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:39.282 [2024-12-16 12:27:44.688443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:39.282 [2024-12-16 12:27:44.688497] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:39.282 [2024-12-16 12:27:44.688513] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:39.282 [2024-12-16 12:27:44.688568] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:39.282 [2024-12-16 12:27:44.688582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:39.282 #40 NEW cov: 12469 ft: 15525 corp: 22/1547b lim: 90 exec/s: 40 rss: 74Mb L: 86/89 MS: 1 CrossOver- 00:07:39.282 [2024-12-16 12:27:44.748567] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:39.282 [2024-12-16 12:27:44.748595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.282 [2024-12-16 12:27:44.748649] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:39.282 [2024-12-16 12:27:44.748666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:39.282 [2024-12-16 12:27:44.748716] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: 
RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:39.282 [2024-12-16 12:27:44.748731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:39.282 [2024-12-16 12:27:44.748784] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:39.282 [2024-12-16 12:27:44.748799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:39.282 #41 NEW cov: 12469 ft: 15549 corp: 23/1634b lim: 90 exec/s: 41 rss: 74Mb L: 87/89 MS: 1 InsertByte- 00:07:39.282 [2024-12-16 12:27:44.808757] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:39.282 [2024-12-16 12:27:44.808795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.282 [2024-12-16 12:27:44.808843] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:39.282 [2024-12-16 12:27:44.808859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:39.282 [2024-12-16 12:27:44.808909] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:39.282 [2024-12-16 12:27:44.808924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:39.282 [2024-12-16 12:27:44.808977] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:39.282 [2024-12-16 12:27:44.808994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:39.542 #42 NEW cov: 12469 ft: 15601 corp: 24/1721b lim: 90 exec/s: 42 rss: 74Mb L: 87/89 MS: 1 ChangeByte- 00:07:39.542 [2024-12-16 12:27:44.868960] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:39.542 [2024-12-16 12:27:44.868990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.542 [2024-12-16 12:27:44.869029] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:39.542 [2024-12-16 12:27:44.869045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:39.542 [2024-12-16 12:27:44.869098] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:39.542 [2024-12-16 12:27:44.869113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:39.542 [2024-12-16 12:27:44.869164] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:39.542 [2024-12-16 12:27:44.869180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:39.542 #43 NEW cov: 12469 ft: 15641 corp: 25/1794b lim: 90 exec/s: 43 rss: 74Mb L: 73/89 MS: 1 ChangeBit- 00:07:39.542 [2024-12-16 12:27:44.928790] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: 
RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:39.542 [2024-12-16 12:27:44.928818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.542 [2024-12-16 12:27:44.928868] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:39.542 [2024-12-16 12:27:44.928885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:39.542 #44 NEW cov: 12469 ft: 15667 corp: 26/1833b lim: 90 exec/s: 44 rss: 75Mb L: 39/89 MS: 1 CopyPart- 00:07:39.542 [2024-12-16 12:27:44.989255] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:39.542 [2024-12-16 12:27:44.989283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.542 [2024-12-16 12:27:44.989327] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:39.542 [2024-12-16 12:27:44.989343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:39.542 [2024-12-16 12:27:44.989396] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:39.542 [2024-12-16 12:27:44.989412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:39.542 [2024-12-16 12:27:44.989466] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:39.542 [2024-12-16 12:27:44.989482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:39.542 #45 NEW cov: 12469 ft: 15682 corp: 27/1912b lim: 90 exec/s: 45 rss: 75Mb L: 79/89 MS: 1 ChangeBit- 00:07:39.542 [2024-12-16 12:27:45.029305] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:39.542 [2024-12-16 12:27:45.029334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.542 [2024-12-16 12:27:45.029375] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:39.542 [2024-12-16 12:27:45.029391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:39.542 [2024-12-16 12:27:45.029442] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:39.542 [2024-12-16 12:27:45.029458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:39.542 [2024-12-16 12:27:45.029511] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:39.542 [2024-12-16 12:27:45.029531] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:39.542 #46 NEW cov: 12469 ft: 15697 corp: 28/1993b lim: 90 exec/s: 46 rss: 75Mb L: 81/89 MS: 1 ChangeByte- 00:07:39.542 [2024-12-16 12:27:45.089532] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: 
RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:39.542 [2024-12-16 12:27:45.089559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.542 [2024-12-16 12:27:45.089605] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:39.542 [2024-12-16 12:27:45.089626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:39.542 [2024-12-16 12:27:45.089679] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:39.542 [2024-12-16 12:27:45.089696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:39.542 [2024-12-16 12:27:45.089746] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:39.542 [2024-12-16 12:27:45.089760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:39.801 #47 NEW cov: 12469 ft: 15707 corp: 29/2072b lim: 90 exec/s: 47 rss: 75Mb L: 79/89 MS: 1 CrossOver- 00:07:39.801 [2024-12-16 12:27:45.129563] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:39.801 [2024-12-16 12:27:45.129591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.801 [2024-12-16 12:27:45.129644] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:39.801 [2024-12-16 12:27:45.129660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:39.801 [2024-12-16 12:27:45.129712] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:39.801 [2024-12-16 12:27:45.129729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:39.801 [2024-12-16 12:27:45.129779] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:39.801 [2024-12-16 12:27:45.129796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:39.801 #48 NEW cov: 12469 ft: 15715 corp: 30/2146b lim: 90 exec/s: 48 rss: 75Mb L: 74/89 MS: 1 InsertByte- 00:07:39.801 [2024-12-16 12:27:45.169702] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:39.801 [2024-12-16 12:27:45.169729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.801 [2024-12-16 12:27:45.169769] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:39.801 [2024-12-16 12:27:45.169785] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:39.801 [2024-12-16 12:27:45.169836] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:39.801 [2024-12-16 12:27:45.169853] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:39.801 [2024-12-16 12:27:45.169906] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:39.801 [2024-12-16 12:27:45.169921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:39.801 #49 NEW cov: 12469 ft: 15727 corp: 31/2219b lim: 90 exec/s: 49 rss: 75Mb L: 73/89 MS: 1 CMP- DE: "\015\000"- 00:07:39.801 [2024-12-16 12:27:45.209799] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:39.801 [2024-12-16 12:27:45.209826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.802 [2024-12-16 12:27:45.209872] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:39.802 [2024-12-16 12:27:45.209888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:39.802 [2024-12-16 12:27:45.209940] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:39.802 [2024-12-16 12:27:45.209956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:39.802 [2024-12-16 12:27:45.210007] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:39.802 [2024-12-16 12:27:45.210023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:39.802 #50 NEW cov: 12469 ft: 15733 corp: 32/2298b lim: 90 exec/s: 50 rss: 75Mb L: 79/89 MS: 1 ShuffleBytes- 00:07:39.802 [2024-12-16 12:27:45.270040] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:39.802 [2024-12-16 12:27:45.270068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.802 [2024-12-16 12:27:45.270106] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:39.802 [2024-12-16 12:27:45.270122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:39.802 [2024-12-16 12:27:45.270176] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:39.802 [2024-12-16 12:27:45.270192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:39.802 [2024-12-16 12:27:45.270244] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:39.802 [2024-12-16 12:27:45.270259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:39.802 #51 NEW cov: 12469 ft: 15752 corp: 33/2384b lim: 90 exec/s: 51 rss: 75Mb L: 86/89 MS: 1 ChangeByte- 00:07:39.802 [2024-12-16 12:27:45.309988] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:39.802 [2024-12-16 12:27:45.310016] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.802 [2024-12-16 12:27:45.310056] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:39.802 [2024-12-16 12:27:45.310072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:39.802 [2024-12-16 12:27:45.310126] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:39.802 [2024-12-16 12:27:45.310143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:39.802 #54 NEW cov: 12469 ft: 15764 corp: 34/2439b lim: 90 exec/s: 54 rss: 75Mb L: 55/89 MS: 3 ShuffleBytes-InsertByte-InsertRepeatedBytes- 00:07:39.802 [2024-12-16 12:27:45.350171] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:39.802 [2024-12-16 12:27:45.350199] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.802 [2024-12-16 12:27:45.350239] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:39.802 [2024-12-16 12:27:45.350258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:39.802 [2024-12-16 12:27:45.350310] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:39.802 [2024-12-16 12:27:45.350326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:39.802 [2024-12-16 12:27:45.350379] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:39.802 [2024-12-16 12:27:45.350395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:40.062 #55 NEW cov: 12469 ft: 15800 corp: 35/2525b lim: 90 exec/s: 55 rss: 75Mb L: 86/89 MS: 1 InsertRepeatedBytes- 00:07:40.062 [2024-12-16 12:27:45.390330] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:40.062 [2024-12-16 12:27:45.390358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:40.062 [2024-12-16 12:27:45.390395] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:40.062 [2024-12-16 12:27:45.390410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:40.062 [2024-12-16 12:27:45.390462] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:40.062 [2024-12-16 12:27:45.390478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:40.062 [2024-12-16 12:27:45.390530] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:40.062 [2024-12-16 12:27:45.390546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR 
FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:40.062 #56 NEW cov: 12469 ft: 15802 corp: 36/2606b lim: 90 exec/s: 56 rss: 75Mb L: 81/89 MS: 1 InsertByte- 00:07:40.062 [2024-12-16 12:27:45.430394] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:40.062 [2024-12-16 12:27:45.430421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:40.062 [2024-12-16 12:27:45.430466] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:40.062 [2024-12-16 12:27:45.430482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:40.062 [2024-12-16 12:27:45.430535] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:40.062 [2024-12-16 12:27:45.430551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:40.062 [2024-12-16 12:27:45.430605] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:40.062 [2024-12-16 12:27:45.430624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:40.062 #57 NEW cov: 12469 ft: 15805 corp: 37/2689b lim: 90 exec/s: 28 rss: 75Mb L: 83/89 MS: 1 CMP- DE: "E?"- 00:07:40.062 #57 DONE cov: 12469 ft: 15805 corp: 37/2689b lim: 90 exec/s: 28 rss: 75Mb 00:07:40.062 ###### Recommended dictionary. ###### 00:07:40.062 "\015\000" # Uses: 0 00:07:40.062 "E?" # Uses: 0 00:07:40.062 ###### End of recommended dictionary. 
###### 00:07:40.062 Done 57 runs in 2 second(s) 00:07:40.062 12:27:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_20.conf /var/tmp/suppress_nvmf_fuzz 00:07:40.062 12:27:45 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:40.062 12:27:45 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:40.062 12:27:45 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 21 1 0x1 00:07:40.062 12:27:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=21 00:07:40.062 12:27:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:40.062 12:27:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:40.062 12:27:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:07:40.062 12:27:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_21.conf 00:07:40.062 12:27:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:40.062 12:27:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:40.062 12:27:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 21 00:07:40.062 12:27:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4421 00:07:40.062 12:27:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:07:40.062 12:27:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' 00:07:40.062 12:27:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4421"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:40.062 12:27:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:40.062 12:27:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:40.062 12:27:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' -c /tmp/fuzz_json_21.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 -Z 21 00:07:40.062 [2024-12-16 12:27:45.621600] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 
00:07:40.062 [2024-12-16 12:27:45.621687] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid993315 ] 00:07:40.321 [2024-12-16 12:27:45.811864] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.321 [2024-12-16 12:27:45.844894] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.580 [2024-12-16 12:27:45.903692] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:40.580 [2024-12-16 12:27:45.920016] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4421 *** 00:07:40.580 INFO: Running with entropic power schedule (0xFF, 100). 00:07:40.580 INFO: Seed: 2765718364 00:07:40.580 INFO: Loaded 1 modules (390948 inline 8-bit counters): 390948 [0x2c8afcc, 0x2cea6f0), 00:07:40.580 INFO: Loaded 1 PC tables (390948 PCs): 390948 [0x2cea6f0,0x32e1930), 00:07:40.580 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:07:40.580 INFO: A corpus is not provided, starting from an empty corpus 00:07:40.580 #2 INITED exec/s: 0 rss: 65Mb 00:07:40.580 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:40.580 This may also happen if the target rejected all inputs we tried so far 00:07:40.580 [2024-12-16 12:27:45.965321] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:40.580 [2024-12-16 12:27:45.965351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:40.580 [2024-12-16 12:27:45.965407] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:40.580 [2024-12-16 12:27:45.965424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:40.840 NEW_FUNC[1/718]: 0x461148 in fuzz_nvm_reservation_release_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:623 00:07:40.840 NEW_FUNC[2/718]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:40.840 #3 NEW cov: 12217 ft: 12216 corp: 2/28b lim: 50 exec/s: 0 rss: 72Mb L: 27/27 MS: 1 InsertRepeatedBytes- 00:07:40.840 [2024-12-16 12:27:46.286189] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:40.840 [2024-12-16 12:27:46.286222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:40.840 [2024-12-16 12:27:46.286281] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:40.840 [2024-12-16 12:27:46.286298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:40.840 #4 NEW cov: 12330 ft: 12710 corp: 3/55b lim: 50 exec/s: 0 rss: 72Mb L: 27/27 MS: 1 ChangeBinInt- 00:07:40.840 [2024-12-16 12:27:46.346622] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:40.840 [2024-12-16 12:27:46.346654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:40.840 [2024-12-16 12:27:46.346693] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:40.840 [2024-12-16 12:27:46.346711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:40.840 [2024-12-16 12:27:46.346767] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:40.840 [2024-12-16 12:27:46.346783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:40.840 [2024-12-16 12:27:46.346841] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:40.840 [2024-12-16 12:27:46.346855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:40.840 #5 NEW cov: 12336 ft: 13377 corp: 4/100b lim: 50 exec/s: 0 rss: 72Mb L: 45/45 MS: 1 InsertRepeatedBytes- 00:07:41.099 [2024-12-16 12:27:46.406316] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:41.099 [2024-12-16 12:27:46.406345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.099 #6 NEW cov: 12421 ft: 14414 corp: 5/115b lim: 50 exec/s: 0 rss: 72Mb L: 15/45 MS: 1 EraseBytes- 00:07:41.099 [2024-12-16 12:27:46.446448] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:41.099 [2024-12-16 12:27:46.446474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.099 #7 NEW cov: 12421 ft: 14496 corp: 6/129b lim: 50 exec/s: 0 rss: 72Mb L: 14/45 MS: 1 InsertRepeatedBytes- 00:07:41.099 [2024-12-16 12:27:46.486649] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:41.099 [2024-12-16 12:27:46.486677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.099 [2024-12-16 12:27:46.486717] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:41.099 [2024-12-16 12:27:46.486734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:41.099 #8 NEW cov: 12421 ft: 14699 corp: 7/152b lim: 50 exec/s: 0 rss: 72Mb L: 23/45 MS: 1 EraseBytes- 00:07:41.099 [2024-12-16 12:27:46.526801] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:41.099 [2024-12-16 12:27:46.526828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.099 [2024-12-16 12:27:46.526879] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:41.099 [2024-12-16 12:27:46.526899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:41.099 #9 NEW cov: 12421 ft: 14811 corp: 8/175b lim: 50 exec/s: 0 rss: 72Mb L: 23/45 MS: 1 CMP- DE: "\377\377\377\377"- 
00:07:41.099 [2024-12-16 12:27:46.587005] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:41.099 [2024-12-16 12:27:46.587032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.099 [2024-12-16 12:27:46.587089] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:41.099 [2024-12-16 12:27:46.587104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:41.099 #10 NEW cov: 12421 ft: 14852 corp: 9/198b lim: 50 exec/s: 0 rss: 72Mb L: 23/45 MS: 1 ChangeBinInt- 00:07:41.099 [2024-12-16 12:27:46.647166] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:41.099 [2024-12-16 12:27:46.647193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.099 [2024-12-16 12:27:46.647239] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:41.099 [2024-12-16 12:27:46.647256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:41.359 #11 NEW cov: 12421 ft: 14902 corp: 10/221b lim: 50 exec/s: 0 rss: 72Mb L: 23/45 MS: 1 ChangeBit- 00:07:41.359 [2024-12-16 12:27:46.687541] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:41.359 [2024-12-16 12:27:46.687569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.359 [2024-12-16 12:27:46.687620] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:41.359 [2024-12-16 12:27:46.687635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:41.359 [2024-12-16 12:27:46.687690] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:41.359 [2024-12-16 12:27:46.687706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:41.359 [2024-12-16 12:27:46.687761] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:41.359 [2024-12-16 12:27:46.687777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:41.359 #12 NEW cov: 12421 ft: 14941 corp: 11/264b lim: 50 exec/s: 0 rss: 72Mb L: 43/45 MS: 1 InsertRepeatedBytes- 00:07:41.359 [2024-12-16 12:27:46.727679] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:41.359 [2024-12-16 12:27:46.727706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.359 [2024-12-16 12:27:46.727754] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:41.359 [2024-12-16 12:27:46.727770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 
00:07:41.359 [2024-12-16 12:27:46.727825] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:41.359 [2024-12-16 12:27:46.727842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:41.359 [2024-12-16 12:27:46.727900] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:41.359 [2024-12-16 12:27:46.727917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:41.359 #13 NEW cov: 12421 ft: 15024 corp: 12/309b lim: 50 exec/s: 0 rss: 72Mb L: 45/45 MS: 1 ShuffleBytes- 00:07:41.359 [2024-12-16 12:27:46.787379] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:41.359 [2024-12-16 12:27:46.787407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.359 #14 NEW cov: 12421 ft: 15068 corp: 13/324b lim: 50 exec/s: 0 rss: 73Mb L: 15/45 MS: 1 ChangeBinInt- 00:07:41.359 [2024-12-16 12:27:46.847552] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:41.359 [2024-12-16 12:27:46.847580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.359 NEW_FUNC[1/1]: 0x1c586b8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:652 00:07:41.359 #15 NEW cov: 12444 ft: 15117 corp: 14/341b lim: 50 exec/s: 0 rss: 73Mb L: 17/45 MS: 1 CMP- DE: "\001\000"- 00:07:41.359 [2024-12-16 12:27:46.887791] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:41.359 [2024-12-16 12:27:46.887817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.359 [2024-12-16 12:27:46.887865] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:41.359 [2024-12-16 12:27:46.887883] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:41.359 #17 NEW cov: 12444 ft: 15152 corp: 15/369b lim: 50 exec/s: 0 rss: 73Mb L: 28/45 MS: 2 CopyPart-InsertRepeatedBytes- 00:07:41.619 [2024-12-16 12:27:46.927896] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:41.619 [2024-12-16 12:27:46.927922] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.619 [2024-12-16 12:27:46.927962] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:41.619 [2024-12-16 12:27:46.927978] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:41.619 #18 NEW cov: 12444 ft: 15206 corp: 16/392b lim: 50 exec/s: 0 rss: 73Mb L: 23/45 MS: 1 ChangeBinInt- 00:07:41.619 [2024-12-16 12:27:46.968017] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:41.619 [2024-12-16 12:27:46.968044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT 
(00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.619 [2024-12-16 12:27:46.968085] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:41.619 [2024-12-16 12:27:46.968102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:41.619 #19 NEW cov: 12444 ft: 15217 corp: 17/416b lim: 50 exec/s: 19 rss: 73Mb L: 24/45 MS: 1 InsertByte- 00:07:41.619 [2024-12-16 12:27:47.008132] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:41.619 [2024-12-16 12:27:47.008159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.619 [2024-12-16 12:27:47.008199] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:41.619 [2024-12-16 12:27:47.008215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:41.619 #20 NEW cov: 12444 ft: 15228 corp: 18/436b lim: 50 exec/s: 20 rss: 73Mb L: 20/45 MS: 1 EraseBytes- 00:07:41.619 [2024-12-16 12:27:47.068300] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:41.619 [2024-12-16 12:27:47.068327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.619 [2024-12-16 12:27:47.068373] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:41.619 [2024-12-16 12:27:47.068390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:41.619 #21 NEW cov: 12444 ft: 15258 corp: 19/458b lim: 50 exec/s: 21 rss: 73Mb L: 22/45 MS: 1 EraseBytes- 00:07:41.619 [2024-12-16 12:27:47.108414] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:41.619 [2024-12-16 12:27:47.108441] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.619 [2024-12-16 12:27:47.108495] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:41.619 [2024-12-16 12:27:47.108512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:41.619 #22 NEW cov: 12444 ft: 15263 corp: 20/478b lim: 50 exec/s: 22 rss: 73Mb L: 20/45 MS: 1 CopyPart- 00:07:41.619 [2024-12-16 12:27:47.168595] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:41.619 [2024-12-16 12:27:47.168626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.619 [2024-12-16 12:27:47.168669] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:41.619 [2024-12-16 12:27:47.168685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:41.878 #23 NEW cov: 12444 ft: 15302 corp: 21/502b lim: 50 exec/s: 23 rss: 73Mb L: 24/45 MS: 1 InsertByte- 00:07:41.878 [2024-12-16 12:27:47.208704] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:41.878 [2024-12-16 12:27:47.208730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.878 [2024-12-16 12:27:47.208770] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:41.878 [2024-12-16 12:27:47.208787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:41.878 #24 NEW cov: 12444 ft: 15372 corp: 22/525b lim: 50 exec/s: 24 rss: 73Mb L: 23/45 MS: 1 ChangeBinInt- 00:07:41.878 [2024-12-16 12:27:47.268908] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:41.878 [2024-12-16 12:27:47.268933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.878 [2024-12-16 12:27:47.268974] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:41.878 [2024-12-16 12:27:47.268991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:41.878 #25 NEW cov: 12444 ft: 15384 corp: 23/549b lim: 50 exec/s: 25 rss: 73Mb L: 24/45 MS: 1 PersAutoDict- DE: "\001\000"- 00:07:41.878 [2024-12-16 12:27:47.329354] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:41.878 [2024-12-16 12:27:47.329381] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.878 [2024-12-16 12:27:47.329431] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:41.878 [2024-12-16 12:27:47.329448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:41.878 [2024-12-16 12:27:47.329502] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:41.878 [2024-12-16 12:27:47.329519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:41.878 [2024-12-16 12:27:47.329578] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:41.878 [2024-12-16 12:27:47.329595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:41.878 #26 NEW cov: 12444 ft: 15402 corp: 24/593b lim: 50 exec/s: 26 rss: 73Mb L: 44/45 MS: 1 CrossOver- 00:07:41.878 [2024-12-16 12:27:47.389503] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:41.878 [2024-12-16 12:27:47.389529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.878 [2024-12-16 12:27:47.389582] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:41.878 [2024-12-16 12:27:47.389598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:41.878 [2024-12-16 12:27:47.389670] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:41.878 [2024-12-16 12:27:47.389687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:41.878 [2024-12-16 12:27:47.389743] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:41.878 [2024-12-16 12:27:47.389769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:41.878 #27 NEW cov: 12444 ft: 15422 corp: 25/633b lim: 50 exec/s: 27 rss: 73Mb L: 40/45 MS: 1 InsertRepeatedBytes- 00:07:41.878 [2024-12-16 12:27:47.429293] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:41.878 [2024-12-16 12:27:47.429320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.878 [2024-12-16 12:27:47.429360] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:41.878 [2024-12-16 12:27:47.429375] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:42.138 #29 NEW cov: 12444 ft: 15428 corp: 26/659b lim: 50 exec/s: 29 rss: 73Mb L: 26/45 MS: 2 ChangeByte-InsertRepeatedBytes- 00:07:42.138 [2024-12-16 12:27:47.469402] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:42.138 [2024-12-16 12:27:47.469428] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:42.138 [2024-12-16 12:27:47.469481] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:42.138 [2024-12-16 12:27:47.469498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:42.138 #30 NEW cov: 12444 ft: 15455 corp: 27/682b lim: 50 exec/s: 30 rss: 73Mb L: 23/45 MS: 1 ShuffleBytes- 00:07:42.138 [2024-12-16 12:27:47.529567] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:42.138 [2024-12-16 12:27:47.529593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:42.138 [2024-12-16 12:27:47.529638] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:42.138 [2024-12-16 12:27:47.529670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:42.138 #31 NEW cov: 12444 ft: 15461 corp: 28/704b lim: 50 exec/s: 31 rss: 73Mb L: 22/45 MS: 1 CrossOver- 00:07:42.138 [2024-12-16 12:27:47.570007] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:42.138 [2024-12-16 12:27:47.570033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:42.138 [2024-12-16 12:27:47.570082] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:42.138 [2024-12-16 12:27:47.570099] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:42.138 [2024-12-16 12:27:47.570155] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:42.138 [2024-12-16 12:27:47.570172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:42.138 [2024-12-16 12:27:47.570231] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:42.138 [2024-12-16 12:27:47.570247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:42.138 #37 NEW cov: 12444 ft: 15504 corp: 29/744b lim: 50 exec/s: 37 rss: 73Mb L: 40/45 MS: 1 CrossOver- 00:07:42.138 [2024-12-16 12:27:47.629885] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:42.138 [2024-12-16 12:27:47.629911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:42.138 [2024-12-16 12:27:47.629961] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:42.138 [2024-12-16 12:27:47.629978] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:42.138 #38 NEW cov: 12444 ft: 15511 corp: 30/764b lim: 50 exec/s: 38 rss: 74Mb L: 20/45 MS: 1 ChangeByte- 00:07:42.138 [2024-12-16 12:27:47.690362] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:42.138 [2024-12-16 12:27:47.690389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:42.138 [2024-12-16 12:27:47.690436] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:42.138 [2024-12-16 12:27:47.690453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:42.138 [2024-12-16 12:27:47.690508] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:42.138 [2024-12-16 12:27:47.690525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:42.138 [2024-12-16 12:27:47.690581] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:42.138 [2024-12-16 12:27:47.690595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:42.398 #39 NEW cov: 12444 ft: 15529 corp: 31/810b lim: 50 exec/s: 39 rss: 74Mb L: 46/46 MS: 1 CopyPart- 00:07:42.398 [2024-12-16 12:27:47.750206] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:42.398 [2024-12-16 12:27:47.750233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:42.398 [2024-12-16 12:27:47.750271] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:42.398 [2024-12-16 12:27:47.750287] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:42.398 #40 NEW cov: 12444 ft: 15563 corp: 32/833b lim: 50 exec/s: 40 rss: 74Mb L: 23/46 MS: 1 CrossOver- 00:07:42.398 [2024-12-16 12:27:47.810386] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:42.398 [2024-12-16 12:27:47.810414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:42.398 [2024-12-16 12:27:47.810454] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:42.398 [2024-12-16 12:27:47.810474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:42.398 #41 NEW cov: 12444 ft: 15599 corp: 33/856b lim: 50 exec/s: 41 rss: 74Mb L: 23/46 MS: 1 ChangeByte- 00:07:42.398 [2024-12-16 12:27:47.850527] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:42.398 [2024-12-16 12:27:47.850554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:42.398 [2024-12-16 12:27:47.850596] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:42.398 [2024-12-16 12:27:47.850615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:42.398 #42 NEW cov: 12444 ft: 15626 corp: 34/882b lim: 50 exec/s: 42 rss: 74Mb L: 26/46 MS: 1 PersAutoDict- DE: "\001\000"- 00:07:42.398 [2024-12-16 12:27:47.910980] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:42.398 [2024-12-16 12:27:47.911008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:42.398 [2024-12-16 12:27:47.911056] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:42.398 [2024-12-16 12:27:47.911074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:42.398 [2024-12-16 12:27:47.911130] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:42.398 [2024-12-16 12:27:47.911147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:42.398 [2024-12-16 12:27:47.911202] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:42.398 [2024-12-16 12:27:47.911219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:42.398 #43 NEW cov: 12444 ft: 15643 corp: 35/928b lim: 50 exec/s: 43 rss: 74Mb L: 46/46 MS: 1 InsertByte- 00:07:42.398 [2024-12-16 12:27:47.950626] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:42.398 [2024-12-16 12:27:47.950655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:42.657 #44 NEW cov: 12444 ft: 15658 corp: 36/947b lim: 50 exec/s: 22 rss: 
74Mb L: 19/46 MS: 1 PersAutoDict- DE: "\001\000"- 00:07:42.657 #44 DONE cov: 12444 ft: 15658 corp: 36/947b lim: 50 exec/s: 22 rss: 74Mb 00:07:42.657 ###### Recommended dictionary. ###### 00:07:42.657 "\377\377\377\377" # Uses: 1 00:07:42.657 "\001\000" # Uses: 3 00:07:42.657 ###### End of recommended dictionary. ###### 00:07:42.657 Done 44 runs in 2 second(s) 00:07:42.657 12:27:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_21.conf /var/tmp/suppress_nvmf_fuzz 00:07:42.658 12:27:48 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:42.658 12:27:48 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:42.658 12:27:48 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 22 1 0x1 00:07:42.658 12:27:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=22 00:07:42.658 12:27:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:42.658 12:27:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:42.658 12:27:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:07:42.658 12:27:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_22.conf 00:07:42.658 12:27:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:42.658 12:27:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:42.658 12:27:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 22 00:07:42.658 12:27:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4422 00:07:42.658 12:27:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:07:42.658 12:27:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' 00:07:42.658 12:27:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4422"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:42.658 12:27:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:42.658 12:27:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:42.658 12:27:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' -c /tmp/fuzz_json_22.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 -Z 22 00:07:42.658 [2024-12-16 12:27:48.132624] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 
00:07:42.658 [2024-12-16 12:27:48.132691] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid993739 ] 00:07:42.917 [2024-12-16 12:27:48.320871] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.917 [2024-12-16 12:27:48.351926] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.917 [2024-12-16 12:27:48.411079] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:42.917 [2024-12-16 12:27:48.427389] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4422 *** 00:07:42.917 INFO: Running with entropic power schedule (0xFF, 100). 00:07:42.917 INFO: Seed: 976728097 00:07:42.917 INFO: Loaded 1 modules (390948 inline 8-bit counters): 390948 [0x2c8afcc, 0x2cea6f0), 00:07:42.917 INFO: Loaded 1 PC tables (390948 PCs): 390948 [0x2cea6f0,0x32e1930), 00:07:42.917 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:07:42.917 INFO: A corpus is not provided, starting from an empty corpus 00:07:42.917 #2 INITED exec/s: 0 rss: 66Mb 00:07:42.917 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:42.917 This may also happen if the target rejected all inputs we tried so far 00:07:42.917 [2024-12-16 12:27:48.476289] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:42.917 [2024-12-16 12:27:48.476319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:42.917 [2024-12-16 12:27:48.476370] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:42.917 [2024-12-16 12:27:48.476387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:43.436 NEW_FUNC[1/715]: 0x463418 in fuzz_nvm_reservation_register_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:644 00:07:43.436 NEW_FUNC[2/715]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:43.436 #10 NEW cov: 12212 ft: 12210 corp: 2/42b lim: 85 exec/s: 0 rss: 73Mb L: 41/41 MS: 3 ShuffleBytes-InsertByte-InsertRepeatedBytes- 00:07:43.436 [2024-12-16 12:27:48.807349] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:43.436 [2024-12-16 12:27:48.807385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:43.436 [2024-12-16 12:27:48.807446] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:43.436 [2024-12-16 12:27:48.807464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:43.436 [2024-12-16 12:27:48.807527] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:43.436 [2024-12-16 12:27:48.807544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 
00:07:43.436 [2024-12-16 12:27:48.807603] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:43.436 [2024-12-16 12:27:48.807628] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:43.436 NEW_FUNC[1/3]: 0x17c9bb8 in nvme_ctrlr_process_init /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_ctrlr.c:3959 00:07:43.436 NEW_FUNC[2/3]: 0x19a5ee8 in spdk_nvme_probe_poll_async /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme.c:1615 00:07:43.436 #11 NEW cov: 12356 ft: 13064 corp: 3/114b lim: 85 exec/s: 0 rss: 73Mb L: 72/72 MS: 1 InsertRepeatedBytes- 00:07:43.436 [2024-12-16 12:27:48.877136] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:43.436 [2024-12-16 12:27:48.877163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:43.436 [2024-12-16 12:27:48.877215] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:43.436 [2024-12-16 12:27:48.877230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:43.436 #17 NEW cov: 12362 ft: 13388 corp: 4/163b lim: 85 exec/s: 0 rss: 73Mb L: 49/72 MS: 1 CMP- DE: "\000\004\000\000\000\000\000\000"- 00:07:43.436 [2024-12-16 12:27:48.917210] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:43.436 [2024-12-16 12:27:48.917239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:43.436 [2024-12-16 12:27:48.917296] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:43.436 [2024-12-16 12:27:48.917312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:43.436 #18 NEW cov: 12447 ft: 13709 corp: 5/212b lim: 85 exec/s: 0 rss: 73Mb L: 49/72 MS: 1 CMP- DE: "\001\005_\274i\216(\002"- 00:07:43.436 [2024-12-16 12:27:48.957617] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:43.436 [2024-12-16 12:27:48.957644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:43.436 [2024-12-16 12:27:48.957692] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:43.436 [2024-12-16 12:27:48.957709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:43.436 [2024-12-16 12:27:48.957762] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:43.436 [2024-12-16 12:27:48.957776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:43.436 [2024-12-16 12:27:48.957831] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:43.436 [2024-12-16 12:27:48.957847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 
sqhd:0005 p:0 m:0 dnr:1 00:07:43.436 #19 NEW cov: 12447 ft: 13878 corp: 6/286b lim: 85 exec/s: 0 rss: 73Mb L: 74/74 MS: 1 InsertRepeatedBytes- 00:07:43.696 [2024-12-16 12:27:49.017522] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:43.696 [2024-12-16 12:27:49.017549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:43.696 [2024-12-16 12:27:49.017592] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:43.696 [2024-12-16 12:27:49.017616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:43.696 #20 NEW cov: 12447 ft: 13967 corp: 7/335b lim: 85 exec/s: 0 rss: 73Mb L: 49/74 MS: 1 ChangeByte- 00:07:43.696 [2024-12-16 12:27:49.077643] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:43.696 [2024-12-16 12:27:49.077670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:43.696 [2024-12-16 12:27:49.077709] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:43.696 [2024-12-16 12:27:49.077726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:43.696 #21 NEW cov: 12447 ft: 14082 corp: 8/384b lim: 85 exec/s: 0 rss: 73Mb L: 49/74 MS: 1 ChangeBinInt- 00:07:43.696 [2024-12-16 12:27:49.117771] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:43.696 [2024-12-16 12:27:49.117797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:43.696 [2024-12-16 12:27:49.117838] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:43.696 [2024-12-16 12:27:49.117855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:43.696 #22 NEW cov: 12447 ft: 14097 corp: 9/433b lim: 85 exec/s: 0 rss: 73Mb L: 49/74 MS: 1 ChangeBinInt- 00:07:43.696 [2024-12-16 12:27:49.178238] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:43.696 [2024-12-16 12:27:49.178265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:43.696 [2024-12-16 12:27:49.178314] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:43.696 [2024-12-16 12:27:49.178330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:43.696 [2024-12-16 12:27:49.178383] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:43.696 [2024-12-16 12:27:49.178397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:43.696 [2024-12-16 12:27:49.178454] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:43.696 [2024-12-16 
12:27:49.178470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:43.696 #23 NEW cov: 12447 ft: 14113 corp: 10/507b lim: 85 exec/s: 0 rss: 74Mb L: 74/74 MS: 1 CopyPart- 00:07:43.696 [2024-12-16 12:27:49.238110] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:43.696 [2024-12-16 12:27:49.238136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:43.696 [2024-12-16 12:27:49.238181] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:43.696 [2024-12-16 12:27:49.238198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:43.956 #24 NEW cov: 12447 ft: 14150 corp: 11/556b lim: 85 exec/s: 0 rss: 74Mb L: 49/74 MS: 1 ChangeBit- 00:07:43.956 [2024-12-16 12:27:49.298112] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:43.956 [2024-12-16 12:27:49.298140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:43.956 #25 NEW cov: 12447 ft: 14967 corp: 12/589b lim: 85 exec/s: 0 rss: 74Mb L: 33/74 MS: 1 EraseBytes- 00:07:43.956 [2024-12-16 12:27:49.338334] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:43.956 [2024-12-16 12:27:49.338360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:43.956 [2024-12-16 12:27:49.338398] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:43.956 [2024-12-16 12:27:49.338415] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:43.956 NEW_FUNC[1/1]: 0x1c586b8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:652 00:07:43.956 #26 NEW cov: 12470 ft: 14986 corp: 13/638b lim: 85 exec/s: 0 rss: 74Mb L: 49/74 MS: 1 ShuffleBytes- 00:07:43.956 [2024-12-16 12:27:49.378763] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:43.956 [2024-12-16 12:27:49.378791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:43.956 [2024-12-16 12:27:49.378836] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:43.956 [2024-12-16 12:27:49.378852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:43.956 [2024-12-16 12:27:49.378907] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:43.956 [2024-12-16 12:27:49.378924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:43.956 [2024-12-16 12:27:49.378980] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:43.956 [2024-12-16 12:27:49.378997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:43.956 #27 NEW cov: 12470 ft: 15000 corp: 14/718b lim: 85 exec/s: 0 rss: 74Mb L: 80/80 MS: 1 CopyPart- 00:07:43.956 [2024-12-16 12:27:49.438847] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:43.956 [2024-12-16 12:27:49.438875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:43.956 [2024-12-16 12:27:49.438914] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:43.956 [2024-12-16 12:27:49.438932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:43.956 [2024-12-16 12:27:49.438989] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:43.956 [2024-12-16 12:27:49.439006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:43.956 #28 NEW cov: 12470 ft: 15280 corp: 15/784b lim: 85 exec/s: 28 rss: 74Mb L: 66/80 MS: 1 InsertRepeatedBytes- 00:07:43.956 [2024-12-16 12:27:49.478747] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:43.956 [2024-12-16 12:27:49.478774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:43.956 [2024-12-16 12:27:49.478829] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:43.956 [2024-12-16 12:27:49.478844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:43.956 #29 NEW cov: 12470 ft: 15312 corp: 16/825b lim: 85 exec/s: 29 rss: 74Mb L: 41/80 MS: 1 PersAutoDict- DE: "\000\004\000\000\000\000\000\000"- 00:07:43.956 [2024-12-16 12:27:49.518725] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:43.956 [2024-12-16 12:27:49.518753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:44.216 #30 NEW cov: 12479 ft: 15379 corp: 17/858b lim: 85 exec/s: 30 rss: 74Mb L: 33/80 MS: 1 CrossOver- 00:07:44.216 [2024-12-16 12:27:49.579048] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:44.216 [2024-12-16 12:27:49.579075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:44.216 [2024-12-16 12:27:49.579116] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:44.216 [2024-12-16 12:27:49.579131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:44.216 #36 NEW cov: 12479 ft: 15434 corp: 18/907b lim: 85 exec/s: 36 rss: 74Mb L: 49/80 MS: 1 ChangeByte- 00:07:44.216 [2024-12-16 12:27:49.619569] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:44.216 [2024-12-16 12:27:49.619595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 
sqhd:0002 p:0 m:0 dnr:1 00:07:44.216 [2024-12-16 12:27:49.619673] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:44.216 [2024-12-16 12:27:49.619688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:44.216 [2024-12-16 12:27:49.619742] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:44.216 [2024-12-16 12:27:49.619759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:44.216 [2024-12-16 12:27:49.619815] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:44.216 [2024-12-16 12:27:49.619831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:44.216 [2024-12-16 12:27:49.619884] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:4 nsid:0 00:07:44.216 [2024-12-16 12:27:49.619901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:44.216 #37 NEW cov: 12479 ft: 15473 corp: 19/992b lim: 85 exec/s: 37 rss: 74Mb L: 85/85 MS: 1 CrossOver- 00:07:44.216 [2024-12-16 12:27:49.679325] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:44.216 [2024-12-16 12:27:49.679352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:44.216 [2024-12-16 12:27:49.679402] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:44.216 [2024-12-16 12:27:49.679419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:44.216 #38 NEW cov: 12479 ft: 15516 corp: 20/1034b lim: 85 exec/s: 38 rss: 74Mb L: 42/85 MS: 1 InsertByte- 00:07:44.216 [2024-12-16 12:27:49.739477] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:44.216 [2024-12-16 12:27:49.739503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:44.216 [2024-12-16 12:27:49.739540] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:44.216 [2024-12-16 12:27:49.739557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:44.216 #39 NEW cov: 12479 ft: 15589 corp: 21/1075b lim: 85 exec/s: 39 rss: 74Mb L: 41/85 MS: 1 CrossOver- 00:07:44.216 [2024-12-16 12:27:49.779482] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:44.216 [2024-12-16 12:27:49.779510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:44.475 #40 NEW cov: 12479 ft: 15612 corp: 22/1108b lim: 85 exec/s: 40 rss: 74Mb L: 33/85 MS: 1 ShuffleBytes- 00:07:44.475 [2024-12-16 12:27:49.819698] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:44.475 [2024-12-16 
12:27:49.819724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:44.475 [2024-12-16 12:27:49.819778] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:44.475 [2024-12-16 12:27:49.819793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:44.475 #41 NEW cov: 12479 ft: 15630 corp: 23/1146b lim: 85 exec/s: 41 rss: 74Mb L: 38/85 MS: 1 EraseBytes- 00:07:44.475 [2024-12-16 12:27:49.880332] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:44.475 [2024-12-16 12:27:49.880360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:44.475 [2024-12-16 12:27:49.880407] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:44.475 [2024-12-16 12:27:49.880425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:44.475 [2024-12-16 12:27:49.880480] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:44.476 [2024-12-16 12:27:49.880497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:44.476 [2024-12-16 12:27:49.880553] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:44.476 [2024-12-16 12:27:49.880570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:44.476 [2024-12-16 12:27:49.880625] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:4 nsid:0 00:07:44.476 [2024-12-16 12:27:49.880641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:44.476 #42 NEW cov: 12479 ft: 15703 corp: 24/1231b lim: 85 exec/s: 42 rss: 74Mb L: 85/85 MS: 1 ChangeBit- 00:07:44.476 [2024-12-16 12:27:49.940023] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:44.476 [2024-12-16 12:27:49.940050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:44.476 [2024-12-16 12:27:49.940089] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:44.476 [2024-12-16 12:27:49.940105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:44.476 #43 NEW cov: 12479 ft: 15711 corp: 25/1280b lim: 85 exec/s: 43 rss: 74Mb L: 49/85 MS: 1 PersAutoDict- DE: "\000\004\000\000\000\000\000\000"- 00:07:44.476 [2024-12-16 12:27:50.000260] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:44.476 [2024-12-16 12:27:50.000287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:44.476 [2024-12-16 12:27:50.000341] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) 
sqid:1 cid:1 nsid:0 00:07:44.476 [2024-12-16 12:27:50.000358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:44.735 #44 NEW cov: 12479 ft: 15723 corp: 26/1319b lim: 85 exec/s: 44 rss: 75Mb L: 39/85 MS: 1 InsertByte- 00:07:44.735 [2024-12-16 12:27:50.060425] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:44.735 [2024-12-16 12:27:50.060453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:44.735 [2024-12-16 12:27:50.060513] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:44.735 [2024-12-16 12:27:50.060528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:44.735 #45 NEW cov: 12479 ft: 15739 corp: 27/1368b lim: 85 exec/s: 45 rss: 75Mb L: 49/85 MS: 1 CrossOver- 00:07:44.735 [2024-12-16 12:27:50.100794] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:44.735 [2024-12-16 12:27:50.100823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:44.735 [2024-12-16 12:27:50.100869] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:44.735 [2024-12-16 12:27:50.100886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:44.735 [2024-12-16 12:27:50.100942] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:44.735 [2024-12-16 12:27:50.100959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:44.735 [2024-12-16 12:27:50.101017] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:44.735 [2024-12-16 12:27:50.101033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:44.735 #46 NEW cov: 12479 ft: 15770 corp: 28/1442b lim: 85 exec/s: 46 rss: 75Mb L: 74/85 MS: 1 PersAutoDict- DE: "\001\005_\274i\216(\002"- 00:07:44.735 [2024-12-16 12:27:50.160544] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:44.735 [2024-12-16 12:27:50.160572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:44.735 #47 NEW cov: 12479 ft: 15810 corp: 29/1469b lim: 85 exec/s: 47 rss: 75Mb L: 27/85 MS: 1 CrossOver- 00:07:44.735 [2024-12-16 12:27:50.200785] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:44.735 [2024-12-16 12:27:50.200812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:44.735 [2024-12-16 12:27:50.200852] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:44.735 [2024-12-16 12:27:50.200869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 
sqhd:0003 p:0 m:0 dnr:1 00:07:44.736 #48 NEW cov: 12479 ft: 15868 corp: 30/1518b lim: 85 exec/s: 48 rss: 75Mb L: 49/85 MS: 1 ChangeBit- 00:07:44.736 [2024-12-16 12:27:50.261260] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:44.736 [2024-12-16 12:27:50.261289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:44.736 [2024-12-16 12:27:50.261329] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:44.736 [2024-12-16 12:27:50.261346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:44.736 [2024-12-16 12:27:50.261404] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:44.736 [2024-12-16 12:27:50.261421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:44.736 [2024-12-16 12:27:50.261477] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:44.736 [2024-12-16 12:27:50.261494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:44.995 #49 NEW cov: 12479 ft: 15884 corp: 31/1598b lim: 85 exec/s: 49 rss: 75Mb L: 80/85 MS: 1 PersAutoDict- DE: "\000\004\000\000\000\000\000\000"- 00:07:44.995 [2024-12-16 12:27:50.321144] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:44.995 [2024-12-16 12:27:50.321171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:44.995 [2024-12-16 12:27:50.321226] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:44.995 [2024-12-16 12:27:50.321244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:44.995 #50 NEW cov: 12479 ft: 15894 corp: 32/1648b lim: 85 exec/s: 50 rss: 75Mb L: 50/85 MS: 1 InsertByte- 00:07:44.995 [2024-12-16 12:27:50.361091] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:44.995 [2024-12-16 12:27:50.361117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:44.995 #51 NEW cov: 12479 ft: 15950 corp: 33/1681b lim: 85 exec/s: 51 rss: 75Mb L: 33/85 MS: 1 PersAutoDict- DE: "\001\005_\274i\216(\002"- 00:07:44.995 [2024-12-16 12:27:50.401332] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:44.995 [2024-12-16 12:27:50.401359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:44.995 [2024-12-16 12:27:50.401401] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:44.995 [2024-12-16 12:27:50.401418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:44.995 #52 NEW cov: 12479 ft: 15951 corp: 34/1722b lim: 85 exec/s: 52 rss: 75Mb L: 41/85 MS: 1 
ChangeBinInt- 00:07:44.995 [2024-12-16 12:27:50.461526] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:44.995 [2024-12-16 12:27:50.461553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:44.995 [2024-12-16 12:27:50.461597] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:44.995 [2024-12-16 12:27:50.461618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:44.995 #53 NEW cov: 12479 ft: 15967 corp: 35/1768b lim: 85 exec/s: 26 rss: 75Mb L: 46/85 MS: 1 CrossOver- 00:07:44.995 #53 DONE cov: 12479 ft: 15967 corp: 35/1768b lim: 85 exec/s: 26 rss: 75Mb 00:07:44.995 ###### Recommended dictionary. ###### 00:07:44.995 "\000\004\000\000\000\000\000\000" # Uses: 3 00:07:44.995 "\001\005_\274i\216(\002" # Uses: 2 00:07:44.995 ###### End of recommended dictionary. ###### 00:07:44.995 Done 53 runs in 2 second(s) 00:07:45.255 12:27:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_22.conf /var/tmp/suppress_nvmf_fuzz 00:07:45.255 12:27:50 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:45.255 12:27:50 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:45.255 12:27:50 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 23 1 0x1 00:07:45.255 12:27:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=23 00:07:45.255 12:27:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:45.255 12:27:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:45.255 12:27:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:07:45.255 12:27:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_23.conf 00:07:45.255 12:27:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:45.255 12:27:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:45.255 12:27:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 23 00:07:45.255 12:27:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4423 00:07:45.255 12:27:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:07:45.255 12:27:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' 00:07:45.255 12:27:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4423"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:45.255 12:27:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:45.255 12:27:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:45.255 12:27:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 
trsvcid:4423' -c /tmp/fuzz_json_23.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 -Z 23 00:07:45.255 [2024-12-16 12:27:50.636165] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:07:45.255 [2024-12-16 12:27:50.636217] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid994139 ] 00:07:45.255 [2024-12-16 12:27:50.815664] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.515 [2024-12-16 12:27:50.849993] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.515 [2024-12-16 12:27:50.908987] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:45.515 [2024-12-16 12:27:50.925294] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4423 *** 00:07:45.515 INFO: Running with entropic power schedule (0xFF, 100). 00:07:45.515 INFO: Seed: 3474745686 00:07:45.515 INFO: Loaded 1 modules (390948 inline 8-bit counters): 390948 [0x2c8afcc, 0x2cea6f0), 00:07:45.515 INFO: Loaded 1 PC tables (390948 PCs): 390948 [0x2cea6f0,0x32e1930), 00:07:45.515 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:07:45.515 INFO: A corpus is not provided, starting from an empty corpus 00:07:45.515 #2 INITED exec/s: 0 rss: 65Mb 00:07:45.515 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:45.515 This may also happen if the target rejected all inputs we tried so far 00:07:45.515 [2024-12-16 12:27:50.984809] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:45.515 [2024-12-16 12:27:50.984851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:45.774 NEW_FUNC[1/717]: 0x466658 in fuzz_nvm_reservation_report_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:671 00:07:45.774 NEW_FUNC[2/717]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:45.774 #12 NEW cov: 12176 ft: 12166 corp: 2/7b lim: 25 exec/s: 0 rss: 72Mb L: 6/6 MS: 5 InsertByte-CopyPart-EraseBytes-InsertByte-CopyPart- 00:07:45.774 [2024-12-16 12:27:51.335667] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:45.774 [2024-12-16 12:27:51.335720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:46.033 #13 NEW cov: 12289 ft: 12910 corp: 3/13b lim: 25 exec/s: 0 rss: 72Mb L: 6/6 MS: 1 ShuffleBytes- 00:07:46.033 [2024-12-16 12:27:51.405852] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:46.033 [2024-12-16 12:27:51.405885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:46.033 #14 NEW cov: 12295 ft: 13106 corp: 4/18b lim: 25 exec/s: 0 rss: 72Mb L: 5/6 MS: 1 EraseBytes- 00:07:46.034 [2024-12-16 12:27:51.455865] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:46.034 [2024-12-16 12:27:51.455897] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:46.034 #15 NEW cov: 12380 ft: 13410 corp: 5/23b lim: 25 exec/s: 0 rss: 72Mb L: 5/6 MS: 1 CopyPart- 00:07:46.034 [2024-12-16 12:27:51.516017] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:46.034 [2024-12-16 12:27:51.516049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:46.034 #16 NEW cov: 12380 ft: 13470 corp: 6/29b lim: 25 exec/s: 0 rss: 72Mb L: 6/6 MS: 1 ChangeByte- 00:07:46.034 [2024-12-16 12:27:51.576280] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:46.034 [2024-12-16 12:27:51.576306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:46.293 #17 NEW cov: 12380 ft: 13502 corp: 7/35b lim: 25 exec/s: 0 rss: 72Mb L: 6/6 MS: 1 ShuffleBytes- 00:07:46.293 [2024-12-16 12:27:51.636358] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:46.293 [2024-12-16 12:27:51.636393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:46.293 #18 NEW cov: 12380 ft: 13528 corp: 8/40b lim: 25 exec/s: 0 rss: 73Mb L: 5/6 MS: 1 ChangeBit- 00:07:46.293 [2024-12-16 12:27:51.696518] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:46.293 [2024-12-16 12:27:51.696549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:46.293 #20 NEW cov: 12380 ft: 13642 corp: 9/49b lim: 25 exec/s: 0 rss: 73Mb L: 9/9 MS: 2 ChangeBit-CMP- DE: "\373X \006\276_\005\000"- 00:07:46.293 [2024-12-16 12:27:51.736878] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:46.293 [2024-12-16 12:27:51.736906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:46.293 [2024-12-16 12:27:51.737012] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:46.293 [2024-12-16 12:27:51.737039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:46.293 [2024-12-16 12:27:51.737167] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:46.293 [2024-12-16 12:27:51.737189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:46.293 #21 NEW cov: 12380 ft: 14060 corp: 10/64b lim: 25 exec/s: 0 rss: 73Mb L: 15/15 MS: 1 InsertRepeatedBytes- 00:07:46.293 [2024-12-16 12:27:51.797153] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:46.293 [2024-12-16 12:27:51.797184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:46.293 [2024-12-16 12:27:51.797282] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:46.293 [2024-12-16 
12:27:51.797307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:46.293 [2024-12-16 12:27:51.797435] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:46.293 [2024-12-16 12:27:51.797462] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:46.293 #22 NEW cov: 12380 ft: 14109 corp: 11/79b lim: 25 exec/s: 0 rss: 73Mb L: 15/15 MS: 1 PersAutoDict- DE: "\373X \006\276_\005\000"- 00:07:46.293 [2024-12-16 12:27:51.857289] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:46.293 [2024-12-16 12:27:51.857320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:46.552 [2024-12-16 12:27:51.857444] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:46.552 [2024-12-16 12:27:51.857471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:46.552 [2024-12-16 12:27:51.857600] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:46.552 [2024-12-16 12:27:51.857626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:46.552 NEW_FUNC[1/1]: 0x1c586b8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:652 00:07:46.552 #23 NEW cov: 12403 ft: 14138 corp: 12/98b lim: 25 exec/s: 0 rss: 73Mb L: 19/19 MS: 1 InsertRepeatedBytes- 00:07:46.552 [2024-12-16 12:27:51.917085] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:46.552 [2024-12-16 12:27:51.917111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:46.552 #24 NEW cov: 12403 ft: 14178 corp: 13/104b lim: 25 exec/s: 0 rss: 73Mb L: 6/19 MS: 1 ShuffleBytes- 00:07:46.552 [2024-12-16 12:27:51.967590] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:46.552 [2024-12-16 12:27:51.967627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:46.552 [2024-12-16 12:27:51.967739] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:46.552 [2024-12-16 12:27:51.967764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:46.552 [2024-12-16 12:27:51.967901] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:46.552 [2024-12-16 12:27:51.967927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:46.552 #25 NEW cov: 12403 ft: 14216 corp: 14/119b lim: 25 exec/s: 25 rss: 73Mb L: 15/19 MS: 1 CrossOver- 00:07:46.552 [2024-12-16 12:27:52.007308] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:46.552 [2024-12-16 12:27:52.007340] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:46.552 #26 NEW cov: 12403 ft: 14261 corp: 15/124b lim: 25 exec/s: 26 rss: 73Mb L: 5/19 MS: 1 ChangeByte- 00:07:46.552 [2024-12-16 12:27:52.047804] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:46.552 [2024-12-16 12:27:52.047835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:46.552 [2024-12-16 12:27:52.047960] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:46.553 [2024-12-16 12:27:52.047987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:46.553 [2024-12-16 12:27:52.048107] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:46.553 [2024-12-16 12:27:52.048134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:46.553 #27 NEW cov: 12403 ft: 14276 corp: 16/139b lim: 25 exec/s: 27 rss: 73Mb L: 15/19 MS: 1 ChangeBinInt- 00:07:46.553 [2024-12-16 12:27:52.097620] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:46.553 [2024-12-16 12:27:52.097645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:46.812 #28 NEW cov: 12403 ft: 14295 corp: 17/144b lim: 25 exec/s: 28 rss: 73Mb L: 5/19 MS: 1 ChangeBit- 00:07:46.812 [2024-12-16 12:27:52.137972] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:46.812 [2024-12-16 12:27:52.138004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:46.812 [2024-12-16 12:27:52.138125] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:46.812 [2024-12-16 12:27:52.138152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:46.812 #29 NEW cov: 12403 ft: 14515 corp: 18/154b lim: 25 exec/s: 29 rss: 73Mb L: 10/19 MS: 1 InsertByte- 00:07:46.812 [2024-12-16 12:27:52.187908] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:46.812 [2024-12-16 12:27:52.187940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:46.812 #31 NEW cov: 12403 ft: 14539 corp: 19/160b lim: 25 exec/s: 31 rss: 73Mb L: 6/19 MS: 2 ChangeBit-InsertRepeatedBytes- 00:07:46.812 [2024-12-16 12:27:52.238050] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:46.812 [2024-12-16 12:27:52.238076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:46.812 #32 NEW cov: 12403 ft: 14560 corp: 20/169b lim: 25 exec/s: 32 rss: 73Mb L: 9/19 MS: 1 ChangeByte- 00:07:46.812 [2024-12-16 12:27:52.288183] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:46.812 [2024-12-16 12:27:52.288214] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:46.812 #33 NEW cov: 12403 ft: 14589 corp: 21/174b lim: 25 exec/s: 33 rss: 73Mb L: 5/19 MS: 1 ShuffleBytes- 00:07:46.812 [2024-12-16 12:27:52.358367] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:46.812 [2024-12-16 12:27:52.358396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:47.071 #34 NEW cov: 12403 ft: 14668 corp: 22/179b lim: 25 exec/s: 34 rss: 73Mb L: 5/19 MS: 1 CopyPart- 00:07:47.071 [2024-12-16 12:27:52.408548] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:47.071 [2024-12-16 12:27:52.408579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:47.071 #40 NEW cov: 12403 ft: 14686 corp: 23/186b lim: 25 exec/s: 40 rss: 73Mb L: 7/19 MS: 1 InsertByte- 00:07:47.071 [2024-12-16 12:27:52.479002] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:47.071 [2024-12-16 12:27:52.479032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:47.071 [2024-12-16 12:27:52.479148] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:47.071 [2024-12-16 12:27:52.479174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:47.071 #41 NEW cov: 12403 ft: 14723 corp: 24/199b lim: 25 exec/s: 41 rss: 73Mb L: 13/19 MS: 1 PersAutoDict- DE: "\373X \006\276_\005\000"- 00:07:47.071 [2024-12-16 12:27:52.529669] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:47.071 [2024-12-16 12:27:52.529705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:47.071 [2024-12-16 12:27:52.529797] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:47.071 [2024-12-16 12:27:52.529823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:47.071 [2024-12-16 12:27:52.529944] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:47.071 [2024-12-16 12:27:52.529970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:47.071 [2024-12-16 12:27:52.530095] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:47.071 [2024-12-16 12:27:52.530120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:47.071 [2024-12-16 12:27:52.530254] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:07:47.071 [2024-12-16 12:27:52.530280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:47.071 #42 NEW cov: 12403 ft: 15224 corp: 25/224b lim: 
25 exec/s: 42 rss: 73Mb L: 25/25 MS: 1 InsertRepeatedBytes- 00:07:47.071 [2024-12-16 12:27:52.599349] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:47.071 [2024-12-16 12:27:52.599378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:47.071 [2024-12-16 12:27:52.599509] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:47.071 [2024-12-16 12:27:52.599529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:47.071 #43 NEW cov: 12403 ft: 15232 corp: 26/238b lim: 25 exec/s: 43 rss: 73Mb L: 14/25 MS: 1 PersAutoDict- DE: "\373X \006\276_\005\000"- 00:07:47.331 [2024-12-16 12:27:52.649232] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:47.331 [2024-12-16 12:27:52.649265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:47.331 #44 NEW cov: 12403 ft: 15271 corp: 27/243b lim: 25 exec/s: 44 rss: 73Mb L: 5/25 MS: 1 ShuffleBytes- 00:07:47.331 [2024-12-16 12:27:52.719816] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:47.331 [2024-12-16 12:27:52.719844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:47.331 [2024-12-16 12:27:52.719918] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:47.331 [2024-12-16 12:27:52.719944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:47.331 [2024-12-16 12:27:52.720069] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:47.331 [2024-12-16 12:27:52.720095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:47.331 #45 NEW cov: 12403 ft: 15287 corp: 28/262b lim: 25 exec/s: 45 rss: 74Mb L: 19/25 MS: 1 ChangeByte- 00:07:47.331 [2024-12-16 12:27:52.790237] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:47.331 [2024-12-16 12:27:52.790267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:47.331 [2024-12-16 12:27:52.790356] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:47.331 [2024-12-16 12:27:52.790382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:47.331 [2024-12-16 12:27:52.790506] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:47.331 [2024-12-16 12:27:52.790532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:47.331 [2024-12-16 12:27:52.790664] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:47.331 [2024-12-16 12:27:52.790691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:47.331 #46 NEW cov: 12403 ft: 15302 corp: 29/286b lim: 25 exec/s: 46 rss: 74Mb L: 24/25 MS: 1 InsertRepeatedBytes- 00:07:47.331 [2024-12-16 12:27:52.840384] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:47.331 [2024-12-16 12:27:52.840416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:47.331 [2024-12-16 12:27:52.840513] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:47.331 [2024-12-16 12:27:52.840538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:47.331 [2024-12-16 12:27:52.840669] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:47.331 [2024-12-16 12:27:52.840691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:47.331 [2024-12-16 12:27:52.840818] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:47.331 [2024-12-16 12:27:52.840843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:47.331 #47 NEW cov: 12403 ft: 15346 corp: 30/307b lim: 25 exec/s: 47 rss: 74Mb L: 21/25 MS: 1 InsertRepeatedBytes- 00:07:47.331 [2024-12-16 12:27:52.880216] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:47.331 [2024-12-16 12:27:52.880246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:47.331 [2024-12-16 12:27:52.880356] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:47.331 [2024-12-16 12:27:52.880384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:47.591 #48 NEW cov: 12403 ft: 15379 corp: 31/321b lim: 25 exec/s: 48 rss: 74Mb L: 14/25 MS: 1 CrossOver- 00:07:47.591 [2024-12-16 12:27:52.940263] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:47.591 [2024-12-16 12:27:52.940289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:47.591 #49 NEW cov: 12403 ft: 15428 corp: 32/328b lim: 25 exec/s: 24 rss: 74Mb L: 7/25 MS: 1 ChangeBit- 00:07:47.591 #49 DONE cov: 12403 ft: 15428 corp: 32/328b lim: 25 exec/s: 24 rss: 74Mb 00:07:47.591 ###### Recommended dictionary. ###### 00:07:47.591 "\373X \006\276_\005\000" # Uses: 3 00:07:47.591 ###### End of recommended dictionary. 
###### 00:07:47.591 Done 49 runs in 2 second(s) 00:07:47.591 12:27:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_23.conf /var/tmp/suppress_nvmf_fuzz 00:07:47.591 12:27:53 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:47.591 12:27:53 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:47.591 12:27:53 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 24 1 0x1 00:07:47.591 12:27:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=24 00:07:47.591 12:27:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:47.591 12:27:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:47.591 12:27:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:07:47.591 12:27:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_24.conf 00:07:47.591 12:27:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:47.591 12:27:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:47.591 12:27:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 24 00:07:47.591 12:27:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4424 00:07:47.591 12:27:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:07:47.591 12:27:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' 00:07:47.591 12:27:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4424"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:47.591 12:27:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:47.591 12:27:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:47.591 12:27:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' -c /tmp/fuzz_json_24.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 -Z 24 00:07:47.591 [2024-12-16 12:27:53.131897] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 
00:07:47.591 [2024-12-16 12:27:53.131972] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid994674 ] 00:07:47.850 [2024-12-16 12:27:53.316082] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.850 [2024-12-16 12:27:53.348805] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.850 [2024-12-16 12:27:53.407929] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:48.110 [2024-12-16 12:27:53.424200] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4424 *** 00:07:48.110 INFO: Running with entropic power schedule (0xFF, 100). 00:07:48.110 INFO: Seed: 1678759613 00:07:48.110 INFO: Loaded 1 modules (390948 inline 8-bit counters): 390948 [0x2c8afcc, 0x2cea6f0), 00:07:48.110 INFO: Loaded 1 PC tables (390948 PCs): 390948 [0x2cea6f0,0x32e1930), 00:07:48.110 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:07:48.110 INFO: A corpus is not provided, starting from an empty corpus 00:07:48.110 #2 INITED exec/s: 0 rss: 65Mb 00:07:48.110 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:48.110 This may also happen if the target rejected all inputs we tried so far 00:07:48.110 [2024-12-16 12:27:53.473378] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.110 [2024-12-16 12:27:53.473407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:48.110 [2024-12-16 12:27:53.473452] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.110 [2024-12-16 12:27:53.473470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:48.110 [2024-12-16 12:27:53.473526] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.110 [2024-12-16 12:27:53.473541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:48.369 NEW_FUNC[1/718]: 0x467748 in fuzz_nvm_compare_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:685 00:07:48.369 NEW_FUNC[2/718]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:48.369 #14 NEW cov: 12241 ft: 12237 corp: 2/75b lim: 100 exec/s: 0 rss: 72Mb L: 74/74 MS: 2 CrossOver-InsertRepeatedBytes- 00:07:48.369 [2024-12-16 12:27:53.804148] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.369 [2024-12-16 12:27:53.804181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:48.369 [2024-12-16 12:27:53.804238] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 
lba:18446744073693495295 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.369 [2024-12-16 12:27:53.804255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:48.369 [2024-12-16 12:27:53.804311] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.369 [2024-12-16 12:27:53.804327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:48.369 #20 NEW cov: 12361 ft: 12801 corp: 3/135b lim: 100 exec/s: 0 rss: 72Mb L: 60/74 MS: 1 CrossOver- 00:07:48.369 [2024-12-16 12:27:53.864276] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.369 [2024-12-16 12:27:53.864304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:48.369 [2024-12-16 12:27:53.864358] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709488895 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.369 [2024-12-16 12:27:53.864376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:48.369 [2024-12-16 12:27:53.864430] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.369 [2024-12-16 12:27:53.864446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:48.369 #26 NEW cov: 12367 ft: 13045 corp: 4/196b lim: 100 exec/s: 0 rss: 72Mb L: 61/74 MS: 1 CrossOver- 00:07:48.370 [2024-12-16 12:27:53.924392] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.370 [2024-12-16 12:27:53.924421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:48.370 [2024-12-16 12:27:53.924475] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709488895 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.370 [2024-12-16 12:27:53.924493] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:48.370 [2024-12-16 12:27:53.924550] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:261993005056 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.370 [2024-12-16 12:27:53.924566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:48.630 #27 NEW cov: 12452 ft: 13263 corp: 5/257b lim: 100 exec/s: 0 rss: 72Mb L: 61/74 MS: 1 ChangeBinInt- 00:07:48.630 [2024-12-16 12:27:53.984572] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:1029 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.630 [2024-12-16 12:27:53.984600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 
cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:48.630 [2024-12-16 12:27:53.984652] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744069481958143 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.630 [2024-12-16 12:27:53.984670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:48.630 [2024-12-16 12:27:53.984726] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.630 [2024-12-16 12:27:53.984746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:48.630 #33 NEW cov: 12452 ft: 13499 corp: 6/332b lim: 100 exec/s: 0 rss: 72Mb L: 75/75 MS: 1 InsertRepeatedBytes- 00:07:48.630 [2024-12-16 12:27:54.024693] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.630 [2024-12-16 12:27:54.024721] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:48.630 [2024-12-16 12:27:54.024758] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073693495295 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.630 [2024-12-16 12:27:54.024774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:48.630 [2024-12-16 12:27:54.024831] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.630 [2024-12-16 12:27:54.024848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:48.630 #34 NEW cov: 12452 ft: 13572 corp: 7/393b lim: 100 exec/s: 0 rss: 72Mb L: 61/75 MS: 1 InsertByte- 00:07:48.630 [2024-12-16 12:27:54.064800] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.630 [2024-12-16 12:27:54.064827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:48.630 [2024-12-16 12:27:54.064873] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.630 [2024-12-16 12:27:54.064889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:48.630 [2024-12-16 12:27:54.064945] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.630 [2024-12-16 12:27:54.064963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:48.630 #35 NEW cov: 12452 ft: 13648 corp: 8/459b lim: 100 exec/s: 0 rss: 72Mb L: 66/75 MS: 1 EraseBytes- 00:07:48.630 [2024-12-16 12:27:54.104911] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:07:48.630 [2024-12-16 12:27:54.104937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:48.630 [2024-12-16 12:27:54.105001] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744070085610239 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.630 [2024-12-16 12:27:54.105017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:48.630 [2024-12-16 12:27:54.105072] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.630 [2024-12-16 12:27:54.105088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:48.630 #36 NEW cov: 12452 ft: 13676 corp: 9/521b lim: 100 exec/s: 0 rss: 72Mb L: 62/75 MS: 1 InsertByte- 00:07:48.630 [2024-12-16 12:27:54.165080] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.630 [2024-12-16 12:27:54.165107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:48.630 [2024-12-16 12:27:54.165147] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.630 [2024-12-16 12:27:54.165165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:48.630 [2024-12-16 12:27:54.165222] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.630 [2024-12-16 12:27:54.165239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:48.630 #37 NEW cov: 12452 ft: 13684 corp: 10/596b lim: 100 exec/s: 0 rss: 72Mb L: 75/75 MS: 1 InsertByte- 00:07:48.890 [2024-12-16 12:27:54.205152] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709488640 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.890 [2024-12-16 12:27:54.205179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:48.890 [2024-12-16 12:27:54.205223] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709488895 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.890 [2024-12-16 12:27:54.205238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:48.890 [2024-12-16 12:27:54.205292] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:261993005056 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.890 [2024-12-16 12:27:54.205307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:48.890 #38 NEW cov: 12452 ft: 13769 corp: 11/657b lim: 100 exec/s: 0 rss: 72Mb L: 61/75 MS: 1 ChangeBinInt- 00:07:48.890 
[2024-12-16 12:27:54.265362] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:1029 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.890 [2024-12-16 12:27:54.265389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:48.890 [2024-12-16 12:27:54.265433] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744069481958143 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.890 [2024-12-16 12:27:54.265450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:48.890 [2024-12-16 12:27:54.265504] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18374686479671623679 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.890 [2024-12-16 12:27:54.265521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:48.890 #39 NEW cov: 12452 ft: 13804 corp: 12/732b lim: 100 exec/s: 0 rss: 73Mb L: 75/75 MS: 1 ChangeBinInt- 00:07:48.890 [2024-12-16 12:27:54.325501] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.890 [2024-12-16 12:27:54.325528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:48.890 [2024-12-16 12:27:54.325574] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709488895 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.890 [2024-12-16 12:27:54.325591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:48.890 [2024-12-16 12:27:54.325648] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.890 [2024-12-16 12:27:54.325680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:48.890 #40 NEW cov: 12452 ft: 13832 corp: 13/793b lim: 100 exec/s: 0 rss: 73Mb L: 61/75 MS: 1 ChangeBit- 00:07:48.890 [2024-12-16 12:27:54.365777] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709488640 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.890 [2024-12-16 12:27:54.365804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:48.890 [2024-12-16 12:27:54.365856] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709488895 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.890 [2024-12-16 12:27:54.365873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:48.890 [2024-12-16 12:27:54.365926] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:261993005056 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.890 [2024-12-16 12:27:54.365942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 
cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:48.890 [2024-12-16 12:27:54.365997] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:13527612320720337851 len:48060 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.890 [2024-12-16 12:27:54.366015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:48.890 NEW_FUNC[1/1]: 0x1c586b8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:652 00:07:48.890 #41 NEW cov: 12475 ft: 14220 corp: 14/877b lim: 100 exec/s: 0 rss: 73Mb L: 84/84 MS: 1 InsertRepeatedBytes- 00:07:48.890 [2024-12-16 12:27:54.425822] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.890 [2024-12-16 12:27:54.425849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:48.890 [2024-12-16 12:27:54.425896] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073693495295 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.890 [2024-12-16 12:27:54.425913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:48.890 [2024-12-16 12:27:54.425967] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.890 [2024-12-16 12:27:54.425983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:48.890 #42 NEW cov: 12475 ft: 14245 corp: 15/937b lim: 100 exec/s: 0 rss: 73Mb L: 60/84 MS: 1 ChangeBinInt- 00:07:49.150 [2024-12-16 12:27:54.465921] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.150 [2024-12-16 12:27:54.465948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:49.150 [2024-12-16 12:27:54.465994] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744070085610239 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.150 [2024-12-16 12:27:54.466010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:49.150 [2024-12-16 12:27:54.466067] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.150 [2024-12-16 12:27:54.466083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:49.150 #43 NEW cov: 12475 ft: 14274 corp: 16/999b lim: 100 exec/s: 43 rss: 73Mb L: 62/84 MS: 1 ChangeBinInt- 00:07:49.150 [2024-12-16 12:27:54.525950] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.150 [2024-12-16 12:27:54.525980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:49.150 [2024-12-16 
12:27:54.526022] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.150 [2024-12-16 12:27:54.526038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:49.150 #44 NEW cov: 12475 ft: 14627 corp: 17/1058b lim: 100 exec/s: 44 rss: 73Mb L: 59/84 MS: 1 EraseBytes- 00:07:49.150 [2024-12-16 12:27:54.566062] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:1029 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.150 [2024-12-16 12:27:54.566089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:49.150 [2024-12-16 12:27:54.566145] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744069481958143 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.150 [2024-12-16 12:27:54.566160] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:49.150 #45 NEW cov: 12475 ft: 14667 corp: 18/1100b lim: 100 exec/s: 45 rss: 73Mb L: 42/84 MS: 1 EraseBytes- 00:07:49.150 [2024-12-16 12:27:54.606317] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:1029 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.150 [2024-12-16 12:27:54.606344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:49.150 [2024-12-16 12:27:54.606408] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744069481958143 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.150 [2024-12-16 12:27:54.606426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:49.150 [2024-12-16 12:27:54.606482] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073695592447 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.150 [2024-12-16 12:27:54.606497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:49.150 #46 NEW cov: 12475 ft: 14757 corp: 19/1175b lim: 100 exec/s: 46 rss: 73Mb L: 75/84 MS: 1 ChangeByte- 00:07:49.150 [2024-12-16 12:27:54.646282] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18437736874454810623 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.150 [2024-12-16 12:27:54.646308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:49.150 [2024-12-16 12:27:54.646369] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.150 [2024-12-16 12:27:54.646386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:49.150 #47 NEW cov: 12475 ft: 14833 corp: 20/1234b lim: 100 exec/s: 47 rss: 73Mb L: 59/84 MS: 1 ChangeBit- 00:07:49.150 [2024-12-16 12:27:54.706789] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 
cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.150 [2024-12-16 12:27:54.706817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:49.150 [2024-12-16 12:27:54.706880] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65291 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.150 [2024-12-16 12:27:54.706895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:49.150 [2024-12-16 12:27:54.706953] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18377782704415440895 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.150 [2024-12-16 12:27:54.706970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:49.150 [2024-12-16 12:27:54.707022] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.150 [2024-12-16 12:27:54.707039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:49.410 #48 NEW cov: 12475 ft: 14856 corp: 21/1318b lim: 100 exec/s: 48 rss: 73Mb L: 84/84 MS: 1 CrossOver- 00:07:49.410 [2024-12-16 12:27:54.746699] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.410 [2024-12-16 12:27:54.746726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:49.410 [2024-12-16 12:27:54.746789] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709488895 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.410 [2024-12-16 12:27:54.746805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:49.410 [2024-12-16 12:27:54.746860] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.410 [2024-12-16 12:27:54.746877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:49.410 #49 NEW cov: 12475 ft: 14865 corp: 22/1380b lim: 100 exec/s: 49 rss: 73Mb L: 62/84 MS: 1 InsertByte- 00:07:49.410 [2024-12-16 12:27:54.786839] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:1029 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.410 [2024-12-16 12:27:54.786865] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:49.410 [2024-12-16 12:27:54.786920] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744069481958143 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.410 [2024-12-16 12:27:54.786936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:49.410 [2024-12-16 12:27:54.786992] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709497087 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.410 [2024-12-16 12:27:54.787010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:49.410 #50 NEW cov: 12475 ft: 14932 corp: 23/1456b lim: 100 exec/s: 50 rss: 73Mb L: 76/84 MS: 1 CopyPart- 00:07:49.410 [2024-12-16 12:27:54.847009] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.410 [2024-12-16 12:27:54.847036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:49.410 [2024-12-16 12:27:54.847098] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709488895 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.410 [2024-12-16 12:27:54.847115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:49.410 [2024-12-16 12:27:54.847170] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.410 [2024-12-16 12:27:54.847190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:49.410 #51 NEW cov: 12475 ft: 14948 corp: 24/1518b lim: 100 exec/s: 51 rss: 73Mb L: 62/84 MS: 1 InsertByte- 00:07:49.410 [2024-12-16 12:27:54.887119] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.410 [2024-12-16 12:27:54.887146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:49.410 [2024-12-16 12:27:54.887210] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446496687872212991 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.410 [2024-12-16 12:27:54.887226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:49.410 [2024-12-16 12:27:54.887282] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.410 [2024-12-16 12:27:54.887299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:49.410 #52 NEW cov: 12475 ft: 14969 corp: 25/1580b lim: 100 exec/s: 52 rss: 73Mb L: 62/84 MS: 1 CMP- DE: "\037\000"- 00:07:49.410 [2024-12-16 12:27:54.927241] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.410 [2024-12-16 12:27:54.927268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:49.410 [2024-12-16 12:27:54.927331] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709488895 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.410 [2024-12-16 12:27:54.927349] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:49.410 [2024-12-16 12:27:54.927406] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446743266255699967 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.410 [2024-12-16 12:27:54.927423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:49.410 #53 NEW cov: 12475 ft: 14986 corp: 26/1642b lim: 100 exec/s: 53 rss: 73Mb L: 62/84 MS: 1 InsertByte- 00:07:49.669 [2024-12-16 12:27:54.987419] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:1029 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.669 [2024-12-16 12:27:54.987446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:49.669 [2024-12-16 12:27:54.987511] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:291326598958547972 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.669 [2024-12-16 12:27:54.987527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:49.669 [2024-12-16 12:27:54.987583] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.669 [2024-12-16 12:27:54.987600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:49.669 #54 NEW cov: 12475 ft: 15002 corp: 27/1720b lim: 100 exec/s: 54 rss: 73Mb L: 78/84 MS: 1 InsertRepeatedBytes- 00:07:49.669 [2024-12-16 12:27:55.027500] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.669 [2024-12-16 12:27:55.027528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:49.669 [2024-12-16 12:27:55.027594] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18425351975479541759 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.669 [2024-12-16 12:27:55.027618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:49.669 [2024-12-16 12:27:55.027672] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.669 [2024-12-16 12:27:55.027687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:49.669 #55 NEW cov: 12475 ft: 15051 corp: 28/1782b lim: 100 exec/s: 55 rss: 73Mb L: 62/84 MS: 1 CopyPart- 00:07:49.669 [2024-12-16 12:27:55.087835] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:1029 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.669 [2024-12-16 12:27:55.087862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:49.669 [2024-12-16 12:27:55.087912] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744069481958143 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.669 [2024-12-16 12:27:55.087930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:49.669 [2024-12-16 12:27:55.087985] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.669 [2024-12-16 12:27:55.088001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:49.669 [2024-12-16 12:27:55.088057] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.669 [2024-12-16 12:27:55.088073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:49.669 #56 NEW cov: 12475 ft: 15085 corp: 29/1867b lim: 100 exec/s: 56 rss: 73Mb L: 85/85 MS: 1 CopyPart- 00:07:49.669 [2024-12-16 12:27:55.127823] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:1029 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.669 [2024-12-16 12:27:55.127850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:49.669 [2024-12-16 12:27:55.127909] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:291326598958547972 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.669 [2024-12-16 12:27:55.127926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:49.669 [2024-12-16 12:27:55.127982] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.669 [2024-12-16 12:27:55.127998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:49.669 #57 NEW cov: 12475 ft: 15127 corp: 30/1946b lim: 100 exec/s: 57 rss: 73Mb L: 79/85 MS: 1 InsertByte- 00:07:49.669 [2024-12-16 12:27:55.187982] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.669 [2024-12-16 12:27:55.188009] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:49.669 [2024-12-16 12:27:55.188074] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709488895 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.669 [2024-12-16 12:27:55.188091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:49.669 [2024-12-16 12:27:55.188151] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446743266255699967 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.669 [2024-12-16 12:27:55.188168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 
00:07:49.669 #58 NEW cov: 12475 ft: 15139 corp: 31/2008b lim: 100 exec/s: 58 rss: 74Mb L: 62/85 MS: 1 ChangeBit- 00:07:49.928 [2024-12-16 12:27:55.248152] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.928 [2024-12-16 12:27:55.248180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:49.928 [2024-12-16 12:27:55.248223] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.928 [2024-12-16 12:27:55.248240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:49.928 [2024-12-16 12:27:55.248296] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.928 [2024-12-16 12:27:55.248313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:49.928 #59 NEW cov: 12475 ft: 15164 corp: 32/2082b lim: 100 exec/s: 59 rss: 74Mb L: 74/85 MS: 1 PersAutoDict- DE: "\037\000"- 00:07:49.928 [2024-12-16 12:27:55.288271] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.928 [2024-12-16 12:27:55.288298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:49.928 [2024-12-16 12:27:55.288353] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709488895 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.929 [2024-12-16 12:27:55.288371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:49.929 [2024-12-16 12:27:55.288426] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.929 [2024-12-16 12:27:55.288443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:49.929 #60 NEW cov: 12475 ft: 15168 corp: 33/2145b lim: 100 exec/s: 60 rss: 74Mb L: 63/85 MS: 1 InsertByte- 00:07:49.929 [2024-12-16 12:27:55.348456] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709488640 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.929 [2024-12-16 12:27:55.348483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:49.929 [2024-12-16 12:27:55.348521] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709488895 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.929 [2024-12-16 12:27:55.348537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:49.929 [2024-12-16 12:27:55.348592] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:261993005056 len:8704 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:07:49.929 [2024-12-16 12:27:55.348615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:49.929 #61 NEW cov: 12475 ft: 15178 corp: 34/2206b lim: 100 exec/s: 61 rss: 74Mb L: 61/85 MS: 1 ChangeByte- 00:07:49.929 [2024-12-16 12:27:55.388531] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:1029 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.929 [2024-12-16 12:27:55.388561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:49.929 [2024-12-16 12:27:55.388620] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744069481958143 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.929 [2024-12-16 12:27:55.388638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:49.929 [2024-12-16 12:27:55.388695] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709497087 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.929 [2024-12-16 12:27:55.388711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:49.929 #62 NEW cov: 12475 ft: 15193 corp: 35/2282b lim: 100 exec/s: 62 rss: 74Mb L: 76/85 MS: 1 CrossOver- 00:07:49.929 [2024-12-16 12:27:55.448859] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.929 [2024-12-16 12:27:55.448886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:49.929 [2024-12-16 12:27:55.448956] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709488895 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.929 [2024-12-16 12:27:55.448971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:49.929 [2024-12-16 12:27:55.449026] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.929 [2024-12-16 12:27:55.449041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:49.929 [2024-12-16 12:27:55.449098] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.929 [2024-12-16 12:27:55.449114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:49.929 #63 NEW cov: 12475 ft: 15221 corp: 36/2365b lim: 100 exec/s: 31 rss: 74Mb L: 83/85 MS: 1 InsertRepeatedBytes- 00:07:49.929 #63 DONE cov: 12475 ft: 15221 corp: 36/2365b lim: 100 exec/s: 31 rss: 74Mb 00:07:49.929 ###### Recommended dictionary. ###### 00:07:49.929 "\037\000" # Uses: 1 00:07:49.929 ###### End of recommended dictionary. 
######
00:07:49.929 Done 63 runs in 2 second(s)
00:07:50.188 12:27:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_24.conf /var/tmp/suppress_nvmf_fuzz
00:07:50.188 12:27:55 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:07:50.188 12:27:55 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:07:50.188 12:27:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@79 -- # trap - SIGINT SIGTERM EXIT
00:07:50.188
00:07:50.188 real 1m3.882s
00:07:50.188 user 1m40.088s
00:07:50.188 sys 0m7.579s
00:07:50.188 12:27:55 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:50.188 12:27:55 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@10 -- # set +x
00:07:50.188 ************************************
00:07:50.188 END TEST nvmf_llvm_fuzz
00:07:50.188 ************************************
00:07:50.188 12:27:55 llvm_fuzz -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}"
00:07:50.188 12:27:55 llvm_fuzz -- fuzz/llvm.sh@18 -- # case "$fuzzer" in
00:07:50.188 12:27:55 llvm_fuzz -- fuzz/llvm.sh@20 -- # run_test vfio_llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh
00:07:50.188 12:27:55 llvm_fuzz -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:50.188 12:27:55 llvm_fuzz -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:50.188 12:27:55 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x
00:07:50.188 ************************************
00:07:50.188 START TEST vfio_llvm_fuzz
00:07:50.188 ************************************
00:07:50.188 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh
00:07:50.450 * Looking for test storage...
00:07:50.450 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:50.450 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:50.450 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:07:50.450 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:50.450 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:50.450 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:50.450 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:50.450 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:50.450 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:07:50.450 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:07:50.450 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:07:50.450 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:07:50.450 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:07:50.450 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:07:50.450 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:07:50.450 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:50.450 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:07:50.450 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:07:50.450 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:50.450 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:50.450 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:07:50.450 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:07:50.450 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:50.450 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:07:50.450 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:07:50.450 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:07:50.450 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:07:50.450 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:50.450 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:07:50.450 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:07:50.450 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:50.450 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:50.450 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:07:50.450 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:50.450 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:50.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.450 --rc genhtml_branch_coverage=1 00:07:50.450 --rc genhtml_function_coverage=1 00:07:50.450 --rc genhtml_legend=1 00:07:50.450 --rc geninfo_all_blocks=1 00:07:50.450 --rc geninfo_unexecuted_blocks=1 00:07:50.450 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:50.450 ' 00:07:50.450 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:50.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.450 --rc genhtml_branch_coverage=1 00:07:50.450 --rc genhtml_function_coverage=1 00:07:50.450 --rc genhtml_legend=1 00:07:50.450 --rc geninfo_all_blocks=1 00:07:50.450 --rc geninfo_unexecuted_blocks=1 00:07:50.450 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:50.450 ' 00:07:50.450 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:50.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.450 --rc genhtml_branch_coverage=1 00:07:50.450 --rc genhtml_function_coverage=1 00:07:50.450 --rc genhtml_legend=1 00:07:50.450 --rc geninfo_all_blocks=1 00:07:50.450 --rc geninfo_unexecuted_blocks=1 00:07:50.450 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:50.450 ' 00:07:50.450 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:50.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.450 --rc genhtml_branch_coverage=1 00:07:50.450 --rc genhtml_function_coverage=1 00:07:50.450 --rc genhtml_legend=1 00:07:50.450 --rc geninfo_all_blocks=1 00:07:50.450 --rc geninfo_unexecuted_blocks=1 00:07:50.450 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:50.450 ' 00:07:50.450 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@64 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:07:50.450 12:27:55 llvm_fuzz.vfio_llvm_fuzz 
-- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:07:50.450 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:50.450 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@34 -- # set -e 00:07:50.450 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:50.450 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:50.450 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:07:50.450 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:07:50.450 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:50.450 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:07:50.450 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:50.450 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:50.450 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:50.450 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:50.450 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:50.450 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:50.450 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:50.450 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:50.450 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:50.450 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:50.450 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:50.450 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:50.451 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:50.451 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:50.451 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:50.451 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:50.451 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:07:50.451 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:07:50.451 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:50.451 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:07:50.451 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:07:50.451 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:07:50.451 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@23 -- # CONFIG_CET=n 00:07:50.451 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@24 -- # 
CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:50.451 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:07:50.451 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:07:50.451 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:07:50.451 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:50.451 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:50.451 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:07:50.451 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:07:50.451 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:07:50.451 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:07:50.451 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:07:50.451 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:07:50.451 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:07:50.451 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@37 -- # CONFIG_FUZZER=y 00:07:50.451 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:07:50.451 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:07:50.451 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:07:50.451 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:07:50.451 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:07:50.451 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:07:50.451 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:07:50.451 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:07:50.451 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:07:50.451 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:50.451 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:07:50.451 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:07:50.451 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:07:50.451 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:07:50.451 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:07:50.451 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:07:50.451 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:50.451 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:07:50.451 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:07:50.451 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:07:50.451 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@58 -- # 
CONFIG_ARCH=native 00:07:50.451 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:07:50.451 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:07:50.451 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:07:50.451 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:07:50.451 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:07:50.451 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:07:50.451 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:07:50.451 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:07:50.451 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:07:50.451 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:07:50.451 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:07:50.451 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:07:50.451 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:07:50.451 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@72 -- # CONFIG_SHARED=n 00:07:50.451 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:07:50.451 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:07:50.451 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:50.451 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@76 -- # CONFIG_FC=n 00:07:50.451 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:07:50.451 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:07:50.451 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:07:50.451 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:07:50.451 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:07:50.451 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:07:50.451 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:07:50.451 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:07:50.451 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:07:50.451 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:07:50.451 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:50.451 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:07:50.451 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:07:50.451 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@90 -- # CONFIG_URING=n 00:07:50.451 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:07:50.451 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # dirname 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:07:50.451 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:07:50.451 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:07:50.451 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:07:50.451 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:50.451 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:07:50.451 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:50.451 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:50.451 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:50.451 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:50.451 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:50.451 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:50.451 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:50.451 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:07:50.451 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:50.451 #define SPDK_CONFIG_H 00:07:50.451 #define SPDK_CONFIG_AIO_FSDEV 1 00:07:50.451 #define SPDK_CONFIG_APPS 1 00:07:50.451 #define SPDK_CONFIG_ARCH native 00:07:50.451 #undef SPDK_CONFIG_ASAN 00:07:50.451 #undef SPDK_CONFIG_AVAHI 00:07:50.451 #undef SPDK_CONFIG_CET 00:07:50.451 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:07:50.451 #define SPDK_CONFIG_COVERAGE 1 00:07:50.451 #define SPDK_CONFIG_CROSS_PREFIX 00:07:50.451 #undef SPDK_CONFIG_CRYPTO 00:07:50.451 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:50.451 #undef SPDK_CONFIG_CUSTOMOCF 00:07:50.451 #undef SPDK_CONFIG_DAOS 00:07:50.451 #define SPDK_CONFIG_DAOS_DIR 00:07:50.451 #define SPDK_CONFIG_DEBUG 1 00:07:50.451 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:50.451 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:07:50.451 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:50.451 #define SPDK_CONFIG_DPDK_LIB_DIR 00:07:50.451 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:50.451 #undef SPDK_CONFIG_DPDK_UADK 00:07:50.451 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:07:50.451 #define SPDK_CONFIG_EXAMPLES 1 00:07:50.451 #undef SPDK_CONFIG_FC 00:07:50.451 #define SPDK_CONFIG_FC_PATH 00:07:50.451 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:50.451 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:50.451 #define SPDK_CONFIG_FSDEV 1 00:07:50.451 #undef SPDK_CONFIG_FUSE 00:07:50.451 #define SPDK_CONFIG_FUZZER 1 00:07:50.451 #define SPDK_CONFIG_FUZZER_LIB /usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:07:50.451 #undef 
SPDK_CONFIG_GOLANG 00:07:50.451 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:50.451 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:50.451 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:50.451 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:07:50.451 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:50.451 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:50.451 #undef SPDK_CONFIG_HAVE_LZ4 00:07:50.452 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:07:50.452 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:07:50.452 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:50.452 #define SPDK_CONFIG_IDXD 1 00:07:50.452 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:50.452 #undef SPDK_CONFIG_IPSEC_MB 00:07:50.452 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:50.452 #define SPDK_CONFIG_ISAL 1 00:07:50.452 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:50.452 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:50.452 #define SPDK_CONFIG_LIBDIR 00:07:50.452 #undef SPDK_CONFIG_LTO 00:07:50.452 #define SPDK_CONFIG_MAX_LCORES 128 00:07:50.452 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:07:50.452 #define SPDK_CONFIG_NVME_CUSE 1 00:07:50.452 #undef SPDK_CONFIG_OCF 00:07:50.452 #define SPDK_CONFIG_OCF_PATH 00:07:50.452 #define SPDK_CONFIG_OPENSSL_PATH 00:07:50.452 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:50.452 #define SPDK_CONFIG_PGO_DIR 00:07:50.452 #undef SPDK_CONFIG_PGO_USE 00:07:50.452 #define SPDK_CONFIG_PREFIX /usr/local 00:07:50.452 #undef SPDK_CONFIG_RAID5F 00:07:50.452 #undef SPDK_CONFIG_RBD 00:07:50.452 #define SPDK_CONFIG_RDMA 1 00:07:50.452 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:50.452 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:50.452 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:50.452 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:50.452 #undef SPDK_CONFIG_SHARED 00:07:50.452 #undef SPDK_CONFIG_SMA 00:07:50.452 #define SPDK_CONFIG_TESTS 1 00:07:50.452 #undef SPDK_CONFIG_TSAN 00:07:50.452 #define SPDK_CONFIG_UBLK 1 00:07:50.452 #define SPDK_CONFIG_UBSAN 1 00:07:50.452 #undef SPDK_CONFIG_UNIT_TESTS 00:07:50.452 #undef SPDK_CONFIG_URING 00:07:50.452 #define SPDK_CONFIG_URING_PATH 00:07:50.452 #undef SPDK_CONFIG_URING_ZNS 00:07:50.452 #undef SPDK_CONFIG_USDT 00:07:50.452 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:50.452 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:50.452 #define SPDK_CONFIG_VFIO_USER 1 00:07:50.452 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:50.452 #define SPDK_CONFIG_VHOST 1 00:07:50.452 #define SPDK_CONFIG_VIRTIO 1 00:07:50.452 #undef SPDK_CONFIG_VTUNE 00:07:50.452 #define SPDK_CONFIG_VTUNE_DIR 00:07:50.452 #define SPDK_CONFIG_WERROR 1 00:07:50.452 #define SPDK_CONFIG_WPDK_DIR 00:07:50.452 #undef SPDK_CONFIG_XNVME 00:07:50.452 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:50.452 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:50.452 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:07:50.452 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:07:50.452 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:50.452 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:50.452 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:50.452 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.452 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.452 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.452 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@5 -- # export PATH 00:07:50.452 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.452 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:07:50.452 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:07:50.452 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:07:50.452 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:07:50.452 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:50.452 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:07:50.452 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- pm/common@64 -- # TEST_TAG=N/A 00:07:50.452 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:07:50.452 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:07:50.452 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- pm/common@68 -- # uname -s 00:07:50.452 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- 
pm/common@68 -- # PM_OS=Linux 00:07:50.452 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:50.452 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:50.452 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:50.452 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:50.452 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:50.452 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:50.452 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- pm/common@76 -- # SUDO[0]= 00:07:50.452 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:50.452 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:50.452 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:50.452 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:50.452 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:50.452 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:07:50.452 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:50.452 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:50.452 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power ]] 00:07:50.452 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@58 -- # : 0 00:07:50.452 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:07:50.452 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@62 -- # : 0 00:07:50.452 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:50.452 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@64 -- # : 0 00:07:50.452 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:07:50.452 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@66 -- # : 1 00:07:50.452 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:50.452 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@68 -- # : 0 00:07:50.452 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:07:50.452 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@70 -- # : 00:07:50.452 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:07:50.452 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@72 -- # : 0 00:07:50.452 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:07:50.452 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@74 -- # : 0 00:07:50.452 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:07:50.452 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@76 -- # : 0 00:07:50.452 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:07:50.452 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- 
common/autotest_common.sh@78 -- # : 0 00:07:50.452 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:50.452 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@80 -- # : 0 00:07:50.452 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:07:50.452 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@82 -- # : 0 00:07:50.452 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:07:50.452 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@84 -- # : 0 00:07:50.452 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:07:50.452 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@86 -- # : 0 00:07:50.452 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:07:50.452 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@88 -- # : 0 00:07:50.452 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:07:50.452 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@90 -- # : 0 00:07:50.452 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:07:50.452 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@92 -- # : 0 00:07:50.452 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:07:50.452 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@94 -- # : 0 00:07:50.452 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:07:50.452 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@96 -- # : 0 00:07:50.452 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:50.452 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@98 -- # : 1 00:07:50.452 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:07:50.452 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@100 -- # : 1 00:07:50.453 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:07:50.453 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@102 -- # : rdma 00:07:50.453 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:50.453 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@104 -- # : 0 00:07:50.453 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:07:50.453 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@106 -- # : 0 00:07:50.453 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:07:50.453 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@108 -- # : 0 00:07:50.453 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:07:50.453 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@110 -- # : 0 00:07:50.453 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:07:50.453 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@112 -- # : 0 00:07:50.453 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:07:50.453 12:27:55 
llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@114 -- # : 0 00:07:50.453 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:07:50.453 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@116 -- # : 0 00:07:50.453 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:07:50.453 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@118 -- # : 0 00:07:50.453 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:07:50.453 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@120 -- # : 0 00:07:50.453 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:50.453 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@122 -- # : 0 00:07:50.453 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:07:50.453 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@124 -- # : 1 00:07:50.453 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:07:50.453 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@126 -- # : 00:07:50.453 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:50.453 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@128 -- # : 0 00:07:50.453 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:07:50.453 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@130 -- # : 0 00:07:50.453 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:07:50.453 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@132 -- # : 0 00:07:50.453 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:07:50.453 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@134 -- # : 0 00:07:50.453 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:07:50.453 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@136 -- # : 0 00:07:50.453 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:07:50.453 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@138 -- # : 0 00:07:50.453 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:07:50.453 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@140 -- # : 00:07:50.453 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:07:50.453 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@142 -- # : true 00:07:50.453 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:07:50.453 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@144 -- # : 0 00:07:50.453 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:07:50.453 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@146 -- # : 0 00:07:50.453 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:07:50.453 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@148 -- # : 0 00:07:50.453 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 
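[annotation] The alternating "# : 0" / "# export SPDK_TEST_<flag>" pairs traced above appear to come from bash's default-assignment idiom in autotest_common.sh. A minimal sketch of that pattern follows; the defaults shown are illustrative, not the script's literal source:

    # Keep a value that autorun-spdk.conf already exported, otherwise default it,
    # then export it so child processes (the fuzzer binaries) inherit the flag.
    : "${SPDK_TEST_NVME:=0}"
    export SPDK_TEST_NVME

    : "${SPDK_TEST_FUZZER:=0}"
    export SPDK_TEST_FUZZER

    # xtrace prints the expanded value, which is why flags pre-set by this job
    # (SPDK_TEST_FUZZER=1, SPDK_RUN_UBSAN=1) show up as "# : 1" in the log.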
00:07:50.453 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@150 -- # : 0 00:07:50.453 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:07:50.453 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@152 -- # : 0 00:07:50.453 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:07:50.453 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@154 -- # : 00:07:50.453 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:07:50.453 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@156 -- # : 0 00:07:50.453 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:07:50.453 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@158 -- # : 0 00:07:50.453 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:07:50.453 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@160 -- # : 0 00:07:50.453 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:07:50.453 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@162 -- # : 0 00:07:50.453 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:07:50.453 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@164 -- # : 0 00:07:50.453 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:07:50.453 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@166 -- # : 0 00:07:50.453 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:07:50.453 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@169 -- # : 00:07:50.453 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:07:50.453 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@171 -- # : 0 00:07:50.453 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:07:50.453 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@173 -- # : 0 00:07:50.453 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:50.453 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@175 -- # : 1 00:07:50.453 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:07:50.453 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@177 -- # : 0 00:07:50.453 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:07:50.453 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:07:50.453 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:07:50.453 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:07:50.453 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:07:50.453 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@183 -- # export 
VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:50.453 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:50.453 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:50.453 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:50.453 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:50.453 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:50.453 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:07:50.453 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:07:50.453 12:27:55 
llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:50.453 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:07:50.453 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:50.453 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:50.453 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:50.453 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:50.453 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:50.453 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:07:50.453 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@206 -- # cat 00:07:50.454 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:07:50.454 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:50.454 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:50.454 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:50.454 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:50.454 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:07:50.454 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:07:50.454 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:50.454 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:50.454 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:50.454 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:50.454 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:50.454 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:50.454 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:50.454 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:50.454 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@262 -- # export 
AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:50.454 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:50.454 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:50.454 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:50.454 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:07:50.454 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:07:50.454 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@269 -- # _LCOV= 00:07:50.454 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:07:50.454 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@270 -- # [[ 1 -eq 1 ]] 00:07:50.454 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@270 -- # _LCOV=1 00:07:50.454 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:07:50.454 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:07:50.454 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@275 -- # lcov_opt='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:07:50.454 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:07:50.454 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@279 -- # export valgrind= 00:07:50.454 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@279 -- # valgrind= 00:07:50.454 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@285 -- # uname -s 00:07:50.454 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:07:50.454 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:07:50.454 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:07:50.454 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:07:50.454 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@289 -- # MAKE=make 00:07:50.454 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j112 00:07:50.454 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:07:50.454 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:07:50.454 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:07:50.454 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@309 -- # TEST_MODE= 00:07:50.454 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@331 -- # [[ -z 995097 ]] 00:07:50.454 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@331 -- # kill -0 995097 00:07:50.454 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:07:50.454 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:07:50.454 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:07:50.454 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- 
common/autotest_common.sh@344 -- # local mount target_dir 00:07:50.454 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:07:50.454 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:07:50.454 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:07:50.454 12:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:07:50.454 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.fX2WJl 00:07:50.454 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:50.454 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:07:50.454 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:07:50.454 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio /tmp/spdk.fX2WJl/tests/vfio /tmp/spdk.fX2WJl 00:07:50.454 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:07:50.454 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:07:50.454 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@340 -- # df -T 00:07:50.454 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:07:50.715 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:07:50.715 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:07:50.715 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:07:50.715 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:07:50.715 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:07:50.715 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:07:50.715 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:07:50.715 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:07:50.715 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=785162240 00:07:50.715 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:07:50.715 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=4499267584 00:07:50.715 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:07:50.715 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:07:50.715 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:07:50.715 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=54194364416 00:07:50.715 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=61730586624 00:07:50.715 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@376 -- # 
uses["$mount"]=7536222208 00:07:50.715 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:07:50.715 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:07:50.715 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:07:50.715 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=30861864960 00:07:50.715 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=30865293312 00:07:50.715 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=3428352 00:07:50.715 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:07:50.715 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:07:50.715 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:07:50.715 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=12340117504 00:07:50.715 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=12346118144 00:07:50.715 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=6000640 00:07:50.715 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:07:50.715 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:07:50.715 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:07:50.715 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=30865096704 00:07:50.715 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=30865293312 00:07:50.715 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=196608 00:07:50.715 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:07:50.715 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:07:50.715 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:07:50.715 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=6173044736 00:07:50.715 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=6173057024 00:07:50.715 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:07:50.715 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:07:50.715 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:07:50.715 * Looking for test storage... 
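[annotation] The df parsing just traced loads every mount into a set of associative arrays before the test-storage search that follows. A standalone sketch of the same pattern, assuming GNU df with -B1 so sizes come out in bytes like the values above; this mirrors the trace, it is not the literal autotest_common.sh helper:

    declare -A mounts fss sizes avails uses
    while read -r source fs size use avail _ mount; do
        mounts["$mount"]=$source     # device or pseudo-filesystem name
        fss["$mount"]=$fs            # filesystem type (overlay, tmpfs, ext2, ...)
        sizes["$mount"]=$size
        avails["$mount"]=$avail
        uses["$mount"]=$use
    done < <(df -T -B1 | grep -v Filesystem)   # process substitution keeps the arrays in this shell

    # 2 GiB plus a 64 MiB margin, i.e. the requested_size=2214592512 seen above.
    requested_size=$((2048 * 1024 * 1024 + 64 * 1024 * 1024))
    # $testdir and $storage_fallback are the candidates from the trace; both must be set.
    for target_dir in "$testdir" "$storage_fallback"; do
        mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/ {print $6}')
        (( avails["$mount"] >= requested_size )) && break
    done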
00:07:50.715 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@381 -- # local target_space new_size 00:07:50.715 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:07:50.715 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:50.715 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:50.715 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@385 -- # mount=/ 00:07:50.715 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@387 -- # target_space=54194364416 00:07:50.715 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:07:50.715 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:07:50.715 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:07:50.715 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:07:50.715 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:07:50.715 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@394 -- # new_size=9750814720 00:07:50.715 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:50.715 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:50.715 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:50.715 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:50.715 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:50.715 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@402 -- # return 0 00:07:50.715 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1698 -- # set -o errtrace 00:07:50.715 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:07:50.715 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:50.715 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:50.715 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1703 -- # true 00:07:50.715 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1705 -- # xtrace_fd 00:07:50.715 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:50.716 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:50.716 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@27 -- # exec 00:07:50.716 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@29 -- # exec 00:07:50.716 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:50.716 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@16 -- # unset -v 
'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:07:50.716 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:50.716 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@18 -- # set -x 00:07:50.716 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:50.716 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:07:50.716 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:50.716 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:50.716 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:50.716 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:50.716 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:50.716 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:07:50.716 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:07:50.716 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:07:50.716 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:07:50.716 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:07:50.716 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:07:50.716 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:07:50.716 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:50.716 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:07:50.716 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:07:50.716 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:50.716 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:50.716 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:07:50.716 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:07:50.716 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:50.716 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:07:50.716 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:07:50.716 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:07:50.716 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:07:50.716 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:50.716 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:07:50.716 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:07:50.716 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:50.716 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:50.716 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:07:50.716 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:50.716 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:50.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.716 --rc genhtml_branch_coverage=1 00:07:50.716 --rc genhtml_function_coverage=1 00:07:50.716 --rc genhtml_legend=1 00:07:50.716 --rc geninfo_all_blocks=1 00:07:50.716 --rc geninfo_unexecuted_blocks=1 00:07:50.716 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:50.716 ' 00:07:50.716 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:50.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.716 --rc genhtml_branch_coverage=1 00:07:50.716 --rc genhtml_function_coverage=1 00:07:50.716 --rc genhtml_legend=1 00:07:50.716 --rc geninfo_all_blocks=1 00:07:50.716 --rc geninfo_unexecuted_blocks=1 00:07:50.716 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:50.716 ' 00:07:50.716 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:50.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.716 --rc genhtml_branch_coverage=1 00:07:50.716 --rc genhtml_function_coverage=1 00:07:50.716 --rc genhtml_legend=1 00:07:50.716 --rc geninfo_all_blocks=1 00:07:50.716 --rc geninfo_unexecuted_blocks=1 00:07:50.716 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:50.716 ' 00:07:50.716 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:50.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.716 --rc genhtml_branch_coverage=1 00:07:50.716 --rc genhtml_function_coverage=1 00:07:50.716 --rc genhtml_legend=1 00:07:50.716 --rc geninfo_all_blocks=1 00:07:50.716 --rc geninfo_unexecuted_blocks=1 00:07:50.716 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:50.716 ' 00:07:50.716 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@65 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/../common.sh 00:07:50.716 12:27:56 
llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@8 -- # pids=() 00:07:50.716 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@67 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:07:50.716 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@68 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:07:50.716 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@68 -- # fuzz_num=7 00:07:50.716 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@69 -- # (( fuzz_num != 0 )) 00:07:50.716 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@71 -- # trap 'cleanup /tmp/vfio-user-* /var/tmp/suppress_vfio_fuzz; exit 1' SIGINT SIGTERM EXIT 00:07:50.716 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@74 -- # mem_size=0 00:07:50.716 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@75 -- # [[ 1 -eq 1 ]] 00:07:50.716 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@76 -- # start_llvm_fuzz_short 7 1 00:07:50.716 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@69 -- # local fuzz_num=7 00:07:50.716 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@70 -- # local time=1 00:07:50.716 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i = 0 )) 00:07:50.716 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:50.716 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:07:50.716 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=0 00:07:50.716 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:50.716 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:50.716 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:07:50.716 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-0 00:07:50.716 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-0/domain/1 00:07:50.716 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-0/domain/2 00:07:50.716 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-0/fuzz_vfio_json.conf 00:07:50.716 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:50.716 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:50.716 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-0 /tmp/vfio-user-0/domain/1 /tmp/vfio-user-0/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:07:50.716 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-0/domain/1%; 00:07:50.716 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-0/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:50.716 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:50.716 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:50.716 12:27:56 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-0/domain/1 -c /tmp/vfio-user-0/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 -Y /tmp/vfio-user-0/domain/2 -r /tmp/vfio-user-0/spdk0.sock -Z 0 00:07:50.716 [2024-12-16 12:27:56.188127] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:07:50.716 [2024-12-16 12:27:56.188215] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid995299 ] 00:07:50.716 [2024-12-16 12:27:56.268782] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.976 [2024-12-16 12:27:56.310532] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.976 INFO: Running with entropic power schedule (0xFF, 100). 00:07:50.976 INFO: Seed: 440808571 00:07:50.976 INFO: Loaded 1 modules (388184 inline 8-bit counters): 388184 [0x2c4b80c, 0x2caa464), 00:07:50.976 INFO: Loaded 1 PC tables (388184 PCs): 388184 [0x2caa468,0x32969e8), 00:07:50.976 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:07:50.976 INFO: A corpus is not provided, starting from an empty corpus 00:07:50.976 #2 INITED exec/s: 0 rss: 67Mb 00:07:50.976 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:50.976 This may also happen if the target rejected all inputs we tried so far 00:07:51.235 [2024-12-16 12:27:56.550583] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /tmp/vfio-user-0/domain/2: enabling controller 00:07:51.493 NEW_FUNC[1/675]: 0x43b608 in fuzz_vfio_user_region_rw /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:84 00:07:51.493 NEW_FUNC[2/675]: 0x441118 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:51.493 #31 NEW cov: 11187 ft: 11216 corp: 2/7b lim: 6 exec/s: 0 rss: 74Mb L: 6/6 MS: 4 InsertByte-CopyPart-CopyPart-InsertByte- 00:07:51.752 NEW_FUNC[1/1]: 0x18e5d08 in nvme_pcie_qpair_submit_request /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_pcie_common.c:1668 00:07:51.752 #32 NEW cov: 11268 ft: 14607 corp: 3/13b lim: 6 exec/s: 0 rss: 75Mb L: 6/6 MS: 1 ChangeByte- 00:07:52.011 NEW_FUNC[1/1]: 0x1c24b08 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:652 00:07:52.011 #33 NEW cov: 11285 ft: 15725 corp: 4/19b lim: 6 exec/s: 0 rss: 76Mb L: 6/6 MS: 1 CrossOver- 00:07:52.011 #34 NEW cov: 11285 ft: 16603 corp: 5/25b lim: 6 exec/s: 34 rss: 76Mb L: 6/6 MS: 1 ShuffleBytes- 00:07:52.270 #35 NEW cov: 11285 ft: 17011 corp: 6/31b lim: 6 exec/s: 35 rss: 76Mb L: 6/6 MS: 1 CopyPart- 00:07:52.529 #36 NEW cov: 11285 ft: 17365 corp: 7/37b lim: 6 exec/s: 36 rss: 76Mb L: 6/6 MS: 1 ChangeByte- 00:07:52.529 #37 NEW cov: 11285 ft: 18030 corp: 8/43b lim: 6 exec/s: 37 rss: 76Mb L: 6/6 MS: 1 CopyPart- 00:07:52.788 #38 NEW cov: 11285 ft: 18188 corp: 9/49b lim: 6 exec/s: 38 rss: 76Mb L: 6/6 MS: 1 ChangeByte- 00:07:53.047 #39 NEW cov: 11292 ft: 18316 corp: 10/55b lim: 6 exec/s: 39 rss: 76Mb L: 6/6 MS: 1 ChangeBit- 00:07:53.306 #40 NEW cov: 11292 ft: 18409 corp: 11/61b lim: 6 exec/s: 20 rss: 77Mb L: 6/6 MS: 1 CopyPart- 00:07:53.306 #40 DONE cov: 11292 ft: 18409 corp: 11/61b lim: 6 exec/s: 20 rss: 77Mb 00:07:53.306 Done 40 
runs in 2 second(s) 00:07:53.306 [2024-12-16 12:27:58.631806] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /tmp/vfio-user-0/domain/2: disabling controller 00:07:53.306 12:27:58 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-0 /var/tmp/suppress_vfio_fuzz 00:07:53.306 12:27:58 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:53.306 12:27:58 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:53.306 12:27:58 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:07:53.306 12:27:58 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=1 00:07:53.306 12:27:58 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:53.306 12:27:58 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:53.306 12:27:58 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:07:53.306 12:27:58 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-1 00:07:53.306 12:27:58 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-1/domain/1 00:07:53.306 12:27:58 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-1/domain/2 00:07:53.306 12:27:58 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-1/fuzz_vfio_json.conf 00:07:53.306 12:27:58 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:53.306 12:27:58 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:53.306 12:27:58 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-1 /tmp/vfio-user-1/domain/1 /tmp/vfio-user-1/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:07:53.306 12:27:58 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-1/domain/1%; 00:07:53.306 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-1/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:53.566 12:27:58 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:53.566 12:27:58 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:53.566 12:27:58 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-1/domain/1 -c /tmp/vfio-user-1/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 -Y /tmp/vfio-user-1/domain/2 -r /tmp/vfio-user-1/spdk1.sock -Z 1 00:07:53.566 [2024-12-16 12:27:58.901747] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 
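[annotation] Fuzzer type 0 has just completed its 40 runs and its /tmp/vfio-user-0 tree has been removed; the loop now launches fuzzer type 1 with the same recipe. Condensed from the commands traced above, the per-fuzzer-type cycle looks roughly like the sketch below, where $spdk_dir and $corpus_dir stand in for the long Jenkins workspace paths and the real vfio/run.sh does more than this:

    i=1                                   # fuzzer_type: 0 = region_rw, 1 = version, ...
    d=/tmp/vfio-user-$i
    mkdir -p "$d" "$d/domain/1" "$d/domain/2" "$corpus_dir/llvm_vfio_$i"

    # Point the template JSON config at this fuzzer's vfio-user sockets.
    sed -e "s%/tmp/vfio-user/domain/1%$d/domain/1%;
            s%/tmp/vfio-user/domain/2%$d/domain/2%" \
        "$spdk_dir/test/fuzz/llvm/vfio/fuzz_vfio_json.conf" > "$d/fuzz_vfio_json.conf"

    # Known, intentional leaks are suppressed so LSAN does not fail the run.
    { echo leak:spdk_nvmf_qpair_disconnect; echo leak:nvmf_ctrlr_create; } \
        > /var/tmp/suppress_vfio_fuzz

    LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 \
    "$spdk_dir/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz" \
        -m 0x1 -s 0 -t 1 -Z "$i" \
        -c "$d/fuzz_vfio_json.conf" -F "$d/domain/1" -Y "$d/domain/2" \
        -D "$corpus_dir/llvm_vfio_$i" -r "$d/spdk$i.sock"

    rm -rf "$d" /var/tmp/suppress_vfio_fuzz   # teardown before the next fuzzer type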
00:07:53.566 [2024-12-16 12:27:58.901821] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid995715 ] 00:07:53.566 [2024-12-16 12:27:58.983777] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.566 [2024-12-16 12:27:59.023924] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.825 INFO: Running with entropic power schedule (0xFF, 100). 00:07:53.825 INFO: Seed: 3159831275 00:07:53.825 INFO: Loaded 1 modules (388184 inline 8-bit counters): 388184 [0x2c4b80c, 0x2caa464), 00:07:53.825 INFO: Loaded 1 PC tables (388184 PCs): 388184 [0x2caa468,0x32969e8), 00:07:53.825 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:07:53.825 INFO: A corpus is not provided, starting from an empty corpus 00:07:53.825 #2 INITED exec/s: 0 rss: 68Mb 00:07:53.825 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:53.825 This may also happen if the target rejected all inputs we tried so far 00:07:53.825 [2024-12-16 12:27:59.268287] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /tmp/vfio-user-1/domain/2: enabling controller 00:07:53.825 [2024-12-16 12:27:59.313715] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:53.825 [2024-12-16 12:27:59.313741] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:53.825 [2024-12-16 12:27:59.313759] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:54.342 NEW_FUNC[1/678]: 0x43bba8 in fuzz_vfio_user_version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:71 00:07:54.342 NEW_FUNC[2/678]: 0x441118 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:54.342 #10 NEW cov: 11251 ft: 11139 corp: 2/5b lim: 4 exec/s: 0 rss: 74Mb L: 4/4 MS: 3 InsertByte-CopyPart-InsertByte- 00:07:54.342 [2024-12-16 12:27:59.771568] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:54.342 [2024-12-16 12:27:59.771600] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:54.342 [2024-12-16 12:27:59.771625] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:54.342 #11 NEW cov: 11265 ft: 13820 corp: 3/9b lim: 4 exec/s: 0 rss: 75Mb L: 4/4 MS: 1 ChangeByte- 00:07:54.601 [2024-12-16 12:27:59.946842] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:54.601 [2024-12-16 12:27:59.946863] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:54.601 [2024-12-16 12:27:59.946881] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:54.601 NEW_FUNC[1/1]: 0x1c24b08 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:652 00:07:54.601 #12 NEW cov: 11282 ft: 13986 corp: 4/13b lim: 4 exec/s: 0 rss: 76Mb L: 4/4 MS: 1 ChangeBinInt- 00:07:54.601 [2024-12-16 12:28:00.125707] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:54.601 [2024-12-16 12:28:00.125733] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 
00:07:54.601 [2024-12-16 12:28:00.125751] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:54.860 #13 NEW cov: 11282 ft: 14782 corp: 5/17b lim: 4 exec/s: 0 rss: 76Mb L: 4/4 MS: 1 ShuffleBytes- 00:07:54.860 [2024-12-16 12:28:00.306870] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:54.860 [2024-12-16 12:28:00.306893] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:54.860 [2024-12-16 12:28:00.306909] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:54.860 #14 NEW cov: 11282 ft: 16384 corp: 6/21b lim: 4 exec/s: 14 rss: 76Mb L: 4/4 MS: 1 CrossOver- 00:07:55.119 [2024-12-16 12:28:00.494869] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:55.119 [2024-12-16 12:28:00.494891] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:55.119 [2024-12-16 12:28:00.494908] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:55.119 #20 NEW cov: 11282 ft: 16807 corp: 7/25b lim: 4 exec/s: 20 rss: 76Mb L: 4/4 MS: 1 ChangeBit- 00:07:55.119 [2024-12-16 12:28:00.669444] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:55.119 [2024-12-16 12:28:00.669467] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:55.119 [2024-12-16 12:28:00.669483] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:55.377 #21 NEW cov: 11282 ft: 17251 corp: 8/29b lim: 4 exec/s: 21 rss: 76Mb L: 4/4 MS: 1 ChangeBinInt- 00:07:55.377 [2024-12-16 12:28:00.844213] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:55.377 [2024-12-16 12:28:00.844234] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:55.377 [2024-12-16 12:28:00.844252] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:55.636 #22 NEW cov: 11282 ft: 17425 corp: 9/33b lim: 4 exec/s: 22 rss: 76Mb L: 4/4 MS: 1 ShuffleBytes- 00:07:55.636 [2024-12-16 12:28:01.016713] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:55.636 [2024-12-16 12:28:01.016734] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:55.636 [2024-12-16 12:28:01.016751] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:55.636 #34 NEW cov: 11289 ft: 17740 corp: 10/37b lim: 4 exec/s: 34 rss: 76Mb L: 4/4 MS: 2 EraseBytes-InsertByte- 00:07:55.636 [2024-12-16 12:28:01.189100] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:55.636 [2024-12-16 12:28:01.189121] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:55.636 [2024-12-16 12:28:01.189137] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:55.895 #35 NEW cov: 11289 ft: 17969 corp: 11/41b lim: 4 exec/s: 17 rss: 76Mb L: 4/4 MS: 1 ChangeBinInt- 00:07:55.895 #35 DONE cov: 11289 ft: 17969 corp: 11/41b lim: 4 exec/s: 17 rss: 76Mb 00:07:55.895 Done 35 runs in 2 second(s) 00:07:55.895 [2024-12-16 12:28:01.314798] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /tmp/vfio-user-1/domain/2: disabling controller 00:07:56.153 12:28:01 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-1 /var/tmp/suppress_vfio_fuzz 00:07:56.153 
12:28:01 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:56.153 12:28:01 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:56.153 12:28:01 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:07:56.153 12:28:01 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=2 00:07:56.153 12:28:01 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:56.153 12:28:01 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:56.153 12:28:01 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:07:56.153 12:28:01 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-2 00:07:56.153 12:28:01 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-2/domain/1 00:07:56.153 12:28:01 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-2/domain/2 00:07:56.153 12:28:01 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-2/fuzz_vfio_json.conf 00:07:56.153 12:28:01 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:56.153 12:28:01 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:56.153 12:28:01 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-2 /tmp/vfio-user-2/domain/1 /tmp/vfio-user-2/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:07:56.153 12:28:01 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-2/domain/1%; 00:07:56.153 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-2/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:56.153 12:28:01 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:56.153 12:28:01 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:56.153 12:28:01 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-2/domain/1 -c /tmp/vfio-user-2/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 -Y /tmp/vfio-user-2/domain/2 -r /tmp/vfio-user-2/spdk2.sock -Z 2 00:07:56.153 [2024-12-16 12:28:01.578723] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:07:56.153 [2024-12-16 12:28:01.578812] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid996122 ] 00:07:56.153 [2024-12-16 12:28:01.660269] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.153 [2024-12-16 12:28:01.699988] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.412 INFO: Running with entropic power schedule (0xFF, 100). 
00:07:56.412 INFO: Seed: 1538858330 00:07:56.412 INFO: Loaded 1 modules (388184 inline 8-bit counters): 388184 [0x2c4b80c, 0x2caa464), 00:07:56.412 INFO: Loaded 1 PC tables (388184 PCs): 388184 [0x2caa468,0x32969e8), 00:07:56.412 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:07:56.412 INFO: A corpus is not provided, starting from an empty corpus 00:07:56.412 #2 INITED exec/s: 0 rss: 67Mb 00:07:56.412 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:56.412 This may also happen if the target rejected all inputs we tried so far 00:07:56.412 [2024-12-16 12:28:01.951017] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /tmp/vfio-user-2/domain/2: enabling controller 00:07:56.670 [2024-12-16 12:28:01.990523] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:56.929 NEW_FUNC[1/677]: 0x43c598 in fuzz_vfio_user_get_region_info /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:103 00:07:56.929 NEW_FUNC[2/677]: 0x441118 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:56.929 #46 NEW cov: 11227 ft: 11081 corp: 2/9b lim: 8 exec/s: 0 rss: 74Mb L: 8/8 MS: 4 InsertByte-CrossOver-InsertRepeatedBytes-CopyPart- 00:07:56.929 [2024-12-16 12:28:02.442403] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:57.188 #57 NEW cov: 11241 ft: 14181 corp: 3/17b lim: 8 exec/s: 0 rss: 75Mb L: 8/8 MS: 1 CrossOver- 00:07:57.188 [2024-12-16 12:28:02.612276] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:57.188 NEW_FUNC[1/1]: 0x1c24b08 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:652 00:07:57.188 #58 NEW cov: 11258 ft: 15070 corp: 4/25b lim: 8 exec/s: 0 rss: 76Mb L: 8/8 MS: 1 ChangeBinInt- 00:07:57.446 [2024-12-16 12:28:02.779966] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:57.446 #60 NEW cov: 11258 ft: 15903 corp: 5/33b lim: 8 exec/s: 0 rss: 76Mb L: 8/8 MS: 2 InsertRepeatedBytes-InsertByte- 00:07:57.446 [2024-12-16 12:28:02.961251] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:57.738 #61 NEW cov: 11261 ft: 16395 corp: 6/41b lim: 8 exec/s: 61 rss: 76Mb L: 8/8 MS: 1 CopyPart- 00:07:57.738 [2024-12-16 12:28:03.133741] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:57.738 #62 NEW cov: 11261 ft: 16444 corp: 7/49b lim: 8 exec/s: 62 rss: 76Mb L: 8/8 MS: 1 ChangeByte- 00:07:58.028 [2024-12-16 12:28:03.312461] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:58.028 #63 NEW cov: 11261 ft: 16608 corp: 8/57b lim: 8 exec/s: 63 rss: 76Mb L: 8/8 MS: 1 CopyPart- 00:07:58.028 [2024-12-16 12:28:03.480654] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:58.028 #64 NEW cov: 11261 ft: 16662 corp: 9/65b lim: 8 exec/s: 64 rss: 76Mb L: 8/8 MS: 1 ChangeBinInt- 00:07:58.286 [2024-12-16 12:28:03.647893] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:58.287 #70 NEW cov: 11268 ft: 16968 corp: 10/73b lim: 8 exec/s: 70 rss: 76Mb L: 8/8 MS: 1 CopyPart- 00:07:58.287 [2024-12-16 12:28:03.817710] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument 
length, command 5 00:07:58.545 #71 NEW cov: 11268 ft: 17086 corp: 11/81b lim: 8 exec/s: 71 rss: 77Mb L: 8/8 MS: 1 ShuffleBytes- 00:07:58.545 [2024-12-16 12:28:03.978627] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:58.545 #77 NEW cov: 11268 ft: 17410 corp: 12/89b lim: 8 exec/s: 38 rss: 77Mb L: 8/8 MS: 1 CrossOver- 00:07:58.545 #77 DONE cov: 11268 ft: 17410 corp: 12/89b lim: 8 exec/s: 38 rss: 77Mb 00:07:58.545 Done 77 runs in 2 second(s) 00:07:58.545 [2024-12-16 12:28:04.101806] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /tmp/vfio-user-2/domain/2: disabling controller 00:07:58.805 12:28:04 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-2 /var/tmp/suppress_vfio_fuzz 00:07:58.805 12:28:04 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:58.805 12:28:04 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:58.805 12:28:04 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:07:58.805 12:28:04 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=3 00:07:58.805 12:28:04 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:58.805 12:28:04 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:58.805 12:28:04 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:07:58.805 12:28:04 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-3 00:07:58.805 12:28:04 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-3/domain/1 00:07:58.805 12:28:04 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-3/domain/2 00:07:58.805 12:28:04 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-3/fuzz_vfio_json.conf 00:07:58.805 12:28:04 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:58.805 12:28:04 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:58.805 12:28:04 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-3 /tmp/vfio-user-3/domain/1 /tmp/vfio-user-3/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:07:58.805 12:28:04 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-3/domain/1%; 00:07:58.805 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-3/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:58.805 12:28:04 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:58.805 12:28:04 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:58.805 12:28:04 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-3/domain/1 -c /tmp/vfio-user-3/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 -Y /tmp/vfio-user-3/domain/2 -r /tmp/vfio-user-3/spdk3.sock -Z 3 00:07:58.805 [2024-12-16 12:28:04.366061] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 
00:07:58.805 [2024-12-16 12:28:04.366133] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid996667 ] 00:07:59.064 [2024-12-16 12:28:04.444353] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.064 [2024-12-16 12:28:04.484417] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.323 INFO: Running with entropic power schedule (0xFF, 100). 00:07:59.324 INFO: Seed: 23867584 00:07:59.324 INFO: Loaded 1 modules (388184 inline 8-bit counters): 388184 [0x2c4b80c, 0x2caa464), 00:07:59.324 INFO: Loaded 1 PC tables (388184 PCs): 388184 [0x2caa468,0x32969e8), 00:07:59.324 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:07:59.324 INFO: A corpus is not provided, starting from an empty corpus 00:07:59.324 #2 INITED exec/s: 0 rss: 67Mb 00:07:59.324 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:59.324 This may also happen if the target rejected all inputs we tried so far 00:07:59.324 [2024-12-16 12:28:04.723024] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /tmp/vfio-user-3/domain/2: enabling controller 00:07:59.841 NEW_FUNC[1/677]: 0x43cc88 in fuzz_vfio_user_dma_map /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:124 00:07:59.841 NEW_FUNC[2/677]: 0x441118 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:59.841 #98 NEW cov: 11239 ft: 11155 corp: 2/33b lim: 32 exec/s: 0 rss: 73Mb L: 32/32 MS: 1 InsertRepeatedBytes- 00:07:59.841 #109 NEW cov: 11256 ft: 14527 corp: 3/65b lim: 32 exec/s: 0 rss: 75Mb L: 32/32 MS: 1 ChangeBit- 00:08:00.100 NEW_FUNC[1/1]: 0x1c24b08 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:652 00:08:00.100 #110 NEW cov: 11273 ft: 15609 corp: 4/97b lim: 32 exec/s: 0 rss: 76Mb L: 32/32 MS: 1 ChangeBit- 00:08:00.358 #111 NEW cov: 11273 ft: 16045 corp: 5/129b lim: 32 exec/s: 111 rss: 76Mb L: 32/32 MS: 1 ChangeByte- 00:08:00.358 #112 NEW cov: 11273 ft: 16334 corp: 6/161b lim: 32 exec/s: 112 rss: 76Mb L: 32/32 MS: 1 ChangeBit- 00:08:00.617 #117 NEW cov: 11273 ft: 16653 corp: 7/193b lim: 32 exec/s: 117 rss: 76Mb L: 32/32 MS: 5 EraseBytes-ChangeByte-InsertRepeatedBytes-InsertRepeatedBytes-CopyPart- 00:08:00.876 #118 NEW cov: 11273 ft: 16694 corp: 8/225b lim: 32 exec/s: 118 rss: 76Mb L: 32/32 MS: 1 ChangeByte- 00:08:00.876 #119 NEW cov: 11273 ft: 16848 corp: 9/257b lim: 32 exec/s: 119 rss: 76Mb L: 32/32 MS: 1 ChangeBinInt- 00:08:01.134 #120 NEW cov: 11280 ft: 17241 corp: 10/289b lim: 32 exec/s: 120 rss: 76Mb L: 32/32 MS: 1 ChangeBinInt- 00:08:01.393 #121 NEW cov: 11280 ft: 17319 corp: 11/321b lim: 32 exec/s: 60 rss: 76Mb L: 32/32 MS: 1 ChangeByte- 00:08:01.393 #121 DONE cov: 11280 ft: 17319 corp: 11/321b lim: 32 exec/s: 60 rss: 76Mb 00:08:01.393 Done 121 runs in 2 second(s) 00:08:01.393 [2024-12-16 12:28:06.795817] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /tmp/vfio-user-3/domain/2: disabling controller 00:08:01.652 12:28:07 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-3 /var/tmp/suppress_vfio_fuzz 00:08:01.652 12:28:07 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:01.652 12:28:07 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:01.652 
12:28:07 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:08:01.652 12:28:07 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=4 00:08:01.652 12:28:07 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:08:01.652 12:28:07 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:08:01.652 12:28:07 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:08:01.652 12:28:07 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-4 00:08:01.652 12:28:07 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-4/domain/1 00:08:01.652 12:28:07 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-4/domain/2 00:08:01.652 12:28:07 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-4/fuzz_vfio_json.conf 00:08:01.652 12:28:07 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:08:01.652 12:28:07 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:08:01.652 12:28:07 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-4 /tmp/vfio-user-4/domain/1 /tmp/vfio-user-4/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:08:01.652 12:28:07 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-4/domain/1%; 00:08:01.652 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-4/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:01.653 12:28:07 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:01.653 12:28:07 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:08:01.653 12:28:07 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-4/domain/1 -c /tmp/vfio-user-4/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 -Y /tmp/vfio-user-4/domain/2 -r /tmp/vfio-user-4/spdk4.sock -Z 4 00:08:01.653 [2024-12-16 12:28:07.063709] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 00:08:01.653 [2024-12-16 12:28:07.063784] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid997195 ] 00:08:01.653 [2024-12-16 12:28:07.143071] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.653 [2024-12-16 12:28:07.182379] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.912 INFO: Running with entropic power schedule (0xFF, 100). 
00:08:01.912 INFO: Seed: 2722882513 00:08:01.912 INFO: Loaded 1 modules (388184 inline 8-bit counters): 388184 [0x2c4b80c, 0x2caa464), 00:08:01.912 INFO: Loaded 1 PC tables (388184 PCs): 388184 [0x2caa468,0x32969e8), 00:08:01.912 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:08:01.912 INFO: A corpus is not provided, starting from an empty corpus 00:08:01.912 #2 INITED exec/s: 0 rss: 66Mb 00:08:01.912 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:01.912 This may also happen if the target rejected all inputs we tried so far 00:08:01.912 [2024-12-16 12:28:07.425574] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /tmp/vfio-user-4/domain/2: enabling controller 00:08:02.429 NEW_FUNC[1/677]: 0x43d508 in fuzz_vfio_user_dma_unmap /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:144 00:08:02.429 NEW_FUNC[2/677]: 0x441118 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:02.429 #45 NEW cov: 11237 ft: 11146 corp: 2/33b lim: 32 exec/s: 0 rss: 73Mb L: 32/32 MS: 3 InsertRepeatedBytes-CopyPart-InsertByte- 00:08:02.688 #46 NEW cov: 11251 ft: 14314 corp: 3/65b lim: 32 exec/s: 0 rss: 74Mb L: 32/32 MS: 1 ChangeBinInt- 00:08:02.688 NEW_FUNC[1/1]: 0x1c24b08 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:652 00:08:02.688 #47 NEW cov: 11271 ft: 14675 corp: 4/97b lim: 32 exec/s: 0 rss: 75Mb L: 32/32 MS: 1 ShuffleBytes- 00:08:02.946 #49 NEW cov: 11271 ft: 15791 corp: 5/129b lim: 32 exec/s: 0 rss: 75Mb L: 32/32 MS: 2 EraseBytes-CopyPart- 00:08:03.206 #50 NEW cov: 11271 ft: 16217 corp: 6/161b lim: 32 exec/s: 50 rss: 75Mb L: 32/32 MS: 1 CMP- DE: "\001\000\000\026"- 00:08:03.206 #53 NEW cov: 11271 ft: 16481 corp: 7/193b lim: 32 exec/s: 53 rss: 75Mb L: 32/32 MS: 3 EraseBytes-ShuffleBytes-CrossOver- 00:08:03.465 #54 NEW cov: 11271 ft: 16935 corp: 8/225b lim: 32 exec/s: 54 rss: 75Mb L: 32/32 MS: 1 PersAutoDict- DE: "\001\000\000\026"- 00:08:03.723 #55 NEW cov: 11271 ft: 17332 corp: 9/257b lim: 32 exec/s: 55 rss: 75Mb L: 32/32 MS: 1 ChangeBit- 00:08:03.982 #56 NEW cov: 11278 ft: 17482 corp: 10/289b lim: 32 exec/s: 56 rss: 75Mb L: 32/32 MS: 1 ChangeBinInt- 00:08:03.982 #57 NEW cov: 11278 ft: 17775 corp: 11/321b lim: 32 exec/s: 28 rss: 75Mb L: 32/32 MS: 1 ShuffleBytes- 00:08:03.982 #57 DONE cov: 11278 ft: 17775 corp: 11/321b lim: 32 exec/s: 28 rss: 75Mb 00:08:03.982 ###### Recommended dictionary. ###### 00:08:03.982 "\001\000\000\026" # Uses: 1 00:08:03.982 ###### End of recommended dictionary. 
###### 00:08:03.982 Done 57 runs in 2 second(s) 00:08:03.982 [2024-12-16 12:28:09.499804] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /tmp/vfio-user-4/domain/2: disabling controller 00:08:04.241 12:28:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-4 /var/tmp/suppress_vfio_fuzz 00:08:04.241 12:28:09 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:04.241 12:28:09 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:04.241 12:28:09 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1 00:08:04.241 12:28:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=5 00:08:04.241 12:28:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:08:04.241 12:28:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:08:04.241 12:28:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:08:04.241 12:28:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-5 00:08:04.241 12:28:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-5/domain/1 00:08:04.241 12:28:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-5/domain/2 00:08:04.241 12:28:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-5/fuzz_vfio_json.conf 00:08:04.241 12:28:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:08:04.241 12:28:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:08:04.241 12:28:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-5 /tmp/vfio-user-5/domain/1 /tmp/vfio-user-5/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:08:04.241 12:28:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-5/domain/1%; 00:08:04.241 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-5/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:04.241 12:28:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:04.241 12:28:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:08:04.241 12:28:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-5/domain/1 -c /tmp/vfio-user-5/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 -Y /tmp/vfio-user-5/domain/2 -r /tmp/vfio-user-5/spdk5.sock -Z 5 00:08:04.241 [2024-12-16 12:28:09.766400] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 
00:08:04.241 [2024-12-16 12:28:09.766471] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid997652 ] 00:08:04.500 [2024-12-16 12:28:09.848758] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.500 [2024-12-16 12:28:09.892012] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.758 INFO: Running with entropic power schedule (0xFF, 100). 00:08:04.758 INFO: Seed: 1133920551 00:08:04.758 INFO: Loaded 1 modules (388184 inline 8-bit counters): 388184 [0x2c4b80c, 0x2caa464), 00:08:04.758 INFO: Loaded 1 PC tables (388184 PCs): 388184 [0x2caa468,0x32969e8), 00:08:04.759 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:08:04.759 INFO: A corpus is not provided, starting from an empty corpus 00:08:04.759 #2 INITED exec/s: 0 rss: 67Mb 00:08:04.759 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:04.759 This may also happen if the target rejected all inputs we tried so far 00:08:04.759 [2024-12-16 12:28:10.137717] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /tmp/vfio-user-5/domain/2: enabling controller 00:08:04.759 [2024-12-16 12:28:10.178719] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:04.759 [2024-12-16 12:28:10.178756] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:05.017 NEW_FUNC[1/677]: 0x43df08 in fuzz_vfio_user_irq_set /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:171 00:08:05.017 NEW_FUNC[2/677]: 0x441118 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:05.017 #40 NEW cov: 11238 ft: 11207 corp: 2/14b lim: 13 exec/s: 0 rss: 74Mb L: 13/13 MS: 3 CMP-ChangeByte-CopyPart- DE: "W\003\000\000\000\000\000\000"- 00:08:05.276 [2024-12-16 12:28:10.642034] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:05.276 [2024-12-16 12:28:10.642078] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:05.276 NEW_FUNC[1/1]: 0x18da158 in nvme_pcie_qpair_process_completions /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_pcie_common.c:866 00:08:05.276 #51 NEW cov: 11257 ft: 13866 corp: 3/27b lim: 13 exec/s: 0 rss: 75Mb L: 13/13 MS: 1 ShuffleBytes- 00:08:05.276 [2024-12-16 12:28:10.811230] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:05.276 [2024-12-16 12:28:10.811262] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:05.534 NEW_FUNC[1/1]: 0x1c24b08 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:652 00:08:05.534 #52 NEW cov: 11274 ft: 15016 corp: 4/40b lim: 13 exec/s: 0 rss: 76Mb L: 13/13 MS: 1 ChangeByte- 00:08:05.534 [2024-12-16 12:28:10.980299] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:05.534 [2024-12-16 12:28:10.980329] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:05.534 #53 NEW cov: 11274 ft: 15923 corp: 5/53b lim: 13 exec/s: 0 rss: 76Mb L: 13/13 MS: 1 ChangeBit- 00:08:05.793 [2024-12-16 12:28:11.145257] vfio_user.c:3143:vfio_user_log: *ERROR*: 
/tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:05.793 [2024-12-16 12:28:11.145287] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:05.793 #59 NEW cov: 11274 ft: 16100 corp: 6/66b lim: 13 exec/s: 59 rss: 76Mb L: 13/13 MS: 1 ShuffleBytes- 00:08:05.793 [2024-12-16 12:28:11.310122] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:05.793 [2024-12-16 12:28:11.310152] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:06.052 #65 NEW cov: 11274 ft: 16350 corp: 7/79b lim: 13 exec/s: 65 rss: 76Mb L: 13/13 MS: 1 ShuffleBytes- 00:08:06.052 [2024-12-16 12:28:11.475127] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:06.052 [2024-12-16 12:28:11.475156] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:06.052 #76 NEW cov: 11274 ft: 16390 corp: 8/92b lim: 13 exec/s: 76 rss: 76Mb L: 13/13 MS: 1 CrossOver- 00:08:06.311 [2024-12-16 12:28:11.639098] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:06.311 [2024-12-16 12:28:11.639129] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:06.311 #77 NEW cov: 11274 ft: 16736 corp: 9/105b lim: 13 exec/s: 77 rss: 76Mb L: 13/13 MS: 1 CrossOver- 00:08:06.311 [2024-12-16 12:28:11.803925] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:06.311 [2024-12-16 12:28:11.803953] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:06.570 #78 NEW cov: 11281 ft: 16788 corp: 10/118b lim: 13 exec/s: 78 rss: 77Mb L: 13/13 MS: 1 ChangeBinInt- 00:08:06.570 [2024-12-16 12:28:11.970307] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:06.570 [2024-12-16 12:28:11.970336] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:06.570 #79 NEW cov: 11281 ft: 16816 corp: 11/131b lim: 13 exec/s: 79 rss: 77Mb L: 13/13 MS: 1 CopyPart- 00:08:06.570 [2024-12-16 12:28:12.133640] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:06.570 [2024-12-16 12:28:12.133670] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:06.829 #80 NEW cov: 11281 ft: 17486 corp: 12/144b lim: 13 exec/s: 40 rss: 77Mb L: 13/13 MS: 1 ShuffleBytes- 00:08:06.829 #80 DONE cov: 11281 ft: 17486 corp: 12/144b lim: 13 exec/s: 40 rss: 77Mb 00:08:06.830 ###### Recommended dictionary. ###### 00:08:06.830 "W\003\000\000\000\000\000\000" # Uses: 3 00:08:06.830 ###### End of recommended dictionary. 
###### 00:08:06.830 Done 80 runs in 2 second(s) 00:08:06.830 [2024-12-16 12:28:12.250808] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /tmp/vfio-user-5/domain/2: disabling controller 00:08:07.089 12:28:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-5 /var/tmp/suppress_vfio_fuzz 00:08:07.089 12:28:12 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:07.089 12:28:12 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:07.089 12:28:12 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1 00:08:07.089 12:28:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=6 00:08:07.089 12:28:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:08:07.089 12:28:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:08:07.089 12:28:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:08:07.089 12:28:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-6 00:08:07.089 12:28:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-6/domain/1 00:08:07.089 12:28:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-6/domain/2 00:08:07.089 12:28:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-6/fuzz_vfio_json.conf 00:08:07.089 12:28:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:08:07.089 12:28:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:08:07.089 12:28:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-6 /tmp/vfio-user-6/domain/1 /tmp/vfio-user-6/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:08:07.089 12:28:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-6/domain/1%; 00:08:07.089 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-6/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:07.089 12:28:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:07.089 12:28:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:08:07.089 12:28:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-6/domain/1 -c /tmp/vfio-user-6/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 -Y /tmp/vfio-user-6/domain/2 -r /tmp/vfio-user-6/spdk6.sock -Z 6 00:08:07.089 [2024-12-16 12:28:12.517873] Starting SPDK v25.01-pre git sha1 a393e5e6e / DPDK 24.03.0 initialization... 
00:08:07.089 [2024-12-16 12:28:12.517945] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid998040 ] 00:08:07.089 [2024-12-16 12:28:12.599711] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.089 [2024-12-16 12:28:12.640388] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.348 INFO: Running with entropic power schedule (0xFF, 100). 00:08:07.348 INFO: Seed: 3886911849 00:08:07.348 INFO: Loaded 1 modules (388184 inline 8-bit counters): 388184 [0x2c4b80c, 0x2caa464), 00:08:07.348 INFO: Loaded 1 PC tables (388184 PCs): 388184 [0x2caa468,0x32969e8), 00:08:07.348 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:08:07.348 INFO: A corpus is not provided, starting from an empty corpus 00:08:07.348 #2 INITED exec/s: 0 rss: 67Mb 00:08:07.348 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:07.348 This may also happen if the target rejected all inputs we tried so far 00:08:07.348 [2024-12-16 12:28:12.887118] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /tmp/vfio-user-6/domain/2: enabling controller 00:08:07.606 [2024-12-16 12:28:12.932638] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:07.606 [2024-12-16 12:28:12.932671] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:07.865 NEW_FUNC[1/678]: 0x43ebf8 in fuzz_vfio_user_set_msix /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:190 00:08:07.865 NEW_FUNC[2/678]: 0x441118 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:07.865 #12 NEW cov: 11241 ft: 11191 corp: 2/10b lim: 9 exec/s: 0 rss: 74Mb L: 9/9 MS: 5 CrossOver-InsertRepeatedBytes-CrossOver-ChangeBit-CopyPart- 00:08:07.865 [2024-12-16 12:28:13.390567] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:07.865 [2024-12-16 12:28:13.390612] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:08.124 #25 NEW cov: 11255 ft: 14317 corp: 3/19b lim: 9 exec/s: 0 rss: 75Mb L: 9/9 MS: 3 EraseBytes-ChangeByte-CMP- DE: "\000\000\000\000"- 00:08:08.124 [2024-12-16 12:28:13.561834] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:08.124 [2024-12-16 12:28:13.561865] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:08.124 NEW_FUNC[1/1]: 0x1c24b08 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:652 00:08:08.124 #26 NEW cov: 11272 ft: 15161 corp: 4/28b lim: 9 exec/s: 0 rss: 76Mb L: 9/9 MS: 1 ShuffleBytes- 00:08:08.383 [2024-12-16 12:28:13.731699] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:08.383 [2024-12-16 12:28:13.731730] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:08.383 #27 NEW cov: 11272 ft: 15336 corp: 5/37b lim: 9 exec/s: 0 rss: 76Mb L: 9/9 MS: 1 ChangeByte- 00:08:08.383 [2024-12-16 12:28:13.903052] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:08.383 [2024-12-16 12:28:13.903079] vfio_user.c: 
144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:08.642 #30 NEW cov: 11272 ft: 15537 corp: 6/46b lim: 9 exec/s: 30 rss: 76Mb L: 9/9 MS: 3 EraseBytes-ChangeBit-InsertByte- 00:08:08.642 [2024-12-16 12:28:14.079276] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:08.642 [2024-12-16 12:28:14.079306] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:08.642 #31 NEW cov: 11272 ft: 16206 corp: 7/55b lim: 9 exec/s: 31 rss: 76Mb L: 9/9 MS: 1 ChangeByte- 00:08:08.901 [2024-12-16 12:28:14.250885] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:08.901 [2024-12-16 12:28:14.250915] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:08.901 #32 NEW cov: 11272 ft: 16507 corp: 8/64b lim: 9 exec/s: 32 rss: 76Mb L: 9/9 MS: 1 CrossOver- 00:08:08.901 [2024-12-16 12:28:14.415879] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:08.901 [2024-12-16 12:28:14.415908] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:09.159 #33 NEW cov: 11272 ft: 17030 corp: 9/73b lim: 9 exec/s: 33 rss: 76Mb L: 9/9 MS: 1 CrossOver- 00:08:09.159 [2024-12-16 12:28:14.583434] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:09.159 [2024-12-16 12:28:14.583463] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:09.159 #34 NEW cov: 11279 ft: 17445 corp: 10/82b lim: 9 exec/s: 34 rss: 77Mb L: 9/9 MS: 1 CopyPart- 00:08:09.418 [2024-12-16 12:28:14.748689] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:09.418 [2024-12-16 12:28:14.748717] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:09.418 #39 NEW cov: 11279 ft: 17873 corp: 11/91b lim: 9 exec/s: 19 rss: 77Mb L: 9/9 MS: 5 EraseBytes-ChangeBinInt-ChangeByte-ChangeBit-InsertByte- 00:08:09.418 #39 DONE cov: 11279 ft: 17873 corp: 11/91b lim: 9 exec/s: 19 rss: 77Mb 00:08:09.418 ###### Recommended dictionary. ###### 00:08:09.418 "\000\000\000\000" # Uses: 0 00:08:09.418 ###### End of recommended dictionary. 
###### 00:08:09.418 Done 39 runs in 2 second(s) 00:08:09.418 [2024-12-16 12:28:14.871799] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /tmp/vfio-user-6/domain/2: disabling controller 00:08:09.676 12:28:15 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-6 /var/tmp/suppress_vfio_fuzz 00:08:09.676 12:28:15 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:09.676 12:28:15 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:09.676 12:28:15 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:08:09.676 00:08:09.676 real 0m19.408s 00:08:09.676 user 0m27.452s 00:08:09.676 sys 0m1.787s 00:08:09.676 12:28:15 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:09.676 12:28:15 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:08:09.676 ************************************ 00:08:09.676 END TEST vfio_llvm_fuzz 00:08:09.676 ************************************ 00:08:09.676 00:08:09.676 real 1m23.660s 00:08:09.676 user 2m7.699s 00:08:09.676 sys 0m9.605s 00:08:09.676 12:28:15 llvm_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:09.676 12:28:15 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:08:09.676 ************************************ 00:08:09.676 END TEST llvm_fuzz 00:08:09.676 ************************************ 00:08:09.676 12:28:15 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:08:09.676 12:28:15 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:08:09.676 12:28:15 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:08:09.676 12:28:15 -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:09.676 12:28:15 -- common/autotest_common.sh@10 -- # set +x 00:08:09.676 12:28:15 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:08:09.676 12:28:15 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:08:09.676 12:28:15 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:08:09.676 12:28:15 -- common/autotest_common.sh@10 -- # set +x 00:08:16.244 INFO: APP EXITING 00:08:16.244 INFO: killing all VMs 00:08:16.244 INFO: killing vhost app 00:08:16.244 INFO: EXIT DONE 00:08:18.779 Waiting for block devices as requested 00:08:18.779 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:08:18.779 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:08:18.779 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:08:18.779 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:08:19.038 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:08:19.038 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:08:19.038 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:08:19.038 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:08:19.297 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:08:19.297 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:08:19.297 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:08:19.556 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:08:19.556 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:08:19.556 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:08:19.815 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:08:19.815 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:08:19.815 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:08:24.010 Cleaning 00:08:24.010 Removing: /dev/shm/spdk_tgt_trace.pid970038 00:08:24.010 Removing: /var/run/dpdk/spdk_pid967585 00:08:24.010 Removing: /var/run/dpdk/spdk_pid968743 00:08:24.010 Removing: /var/run/dpdk/spdk_pid970038 00:08:24.010 Removing: /var/run/dpdk/spdk_pid970513 00:08:24.010 Removing: 
/var/run/dpdk/spdk_pid971599 00:08:24.010 Removing: /var/run/dpdk/spdk_pid971747 00:08:24.010 Removing: /var/run/dpdk/spdk_pid972729 00:08:24.010 Removing: /var/run/dpdk/spdk_pid972749 00:08:24.010 Removing: /var/run/dpdk/spdk_pid973176 00:08:24.010 Removing: /var/run/dpdk/spdk_pid973498 00:08:24.010 Removing: /var/run/dpdk/spdk_pid973821 00:08:24.010 Removing: /var/run/dpdk/spdk_pid974158 00:08:24.010 Removing: /var/run/dpdk/spdk_pid974411 00:08:24.010 Removing: /var/run/dpdk/spdk_pid974560 00:08:24.010 Removing: /var/run/dpdk/spdk_pid974809 00:08:24.010 Removing: /var/run/dpdk/spdk_pid975125 00:08:24.010 Removing: /var/run/dpdk/spdk_pid975842 00:08:24.010 Removing: /var/run/dpdk/spdk_pid978887 00:08:24.010 Removing: /var/run/dpdk/spdk_pid979179 00:08:24.010 Removing: /var/run/dpdk/spdk_pid979474 00:08:24.010 Removing: /var/run/dpdk/spdk_pid979482 00:08:24.010 Removing: /var/run/dpdk/spdk_pid980050 00:08:24.010 Removing: /var/run/dpdk/spdk_pid980204 00:08:24.010 Removing: /var/run/dpdk/spdk_pid980738 00:08:24.010 Removing: /var/run/dpdk/spdk_pid980882 00:08:24.010 Removing: /var/run/dpdk/spdk_pid981153 00:08:24.010 Removing: /var/run/dpdk/spdk_pid981181 00:08:24.010 Removing: /var/run/dpdk/spdk_pid981421 00:08:24.010 Removing: /var/run/dpdk/spdk_pid981486 00:08:24.010 Removing: /var/run/dpdk/spdk_pid981941 00:08:24.010 Removing: /var/run/dpdk/spdk_pid982154 00:08:24.010 Removing: /var/run/dpdk/spdk_pid982434 00:08:24.010 Removing: /var/run/dpdk/spdk_pid982758 00:08:24.010 Removing: /var/run/dpdk/spdk_pid983275 00:08:24.010 Removing: /var/run/dpdk/spdk_pid983802 00:08:24.010 Removing: /var/run/dpdk/spdk_pid984136 00:08:24.010 Removing: /var/run/dpdk/spdk_pid984623 00:08:24.010 Removing: /var/run/dpdk/spdk_pid985274 00:08:24.010 Removing: /var/run/dpdk/spdk_pid985880 00:08:24.010 Removing: /var/run/dpdk/spdk_pid986534 00:08:24.010 Removing: /var/run/dpdk/spdk_pid987067 00:08:24.010 Removing: /var/run/dpdk/spdk_pid987454 00:08:24.010 Removing: /var/run/dpdk/spdk_pid987903 00:08:24.010 Removing: /var/run/dpdk/spdk_pid988432 00:08:24.010 Removing: /var/run/dpdk/spdk_pid988899 00:08:24.010 Removing: /var/run/dpdk/spdk_pid989264 00:08:24.010 Removing: /var/run/dpdk/spdk_pid989793 00:08:24.010 Removing: /var/run/dpdk/spdk_pid990224 00:08:24.010 Removing: /var/run/dpdk/spdk_pid990611 00:08:24.010 Removing: /var/run/dpdk/spdk_pid991142 00:08:24.010 Removing: /var/run/dpdk/spdk_pid991462 00:08:24.010 Removing: /var/run/dpdk/spdk_pid991965 00:08:24.010 Removing: /var/run/dpdk/spdk_pid992471 00:08:24.010 Removing: /var/run/dpdk/spdk_pid992784 00:08:24.010 Removing: /var/run/dpdk/spdk_pid993315 00:08:24.010 Removing: /var/run/dpdk/spdk_pid993739 00:08:24.010 Removing: /var/run/dpdk/spdk_pid994139 00:08:24.010 Removing: /var/run/dpdk/spdk_pid994674 00:08:24.010 Removing: /var/run/dpdk/spdk_pid995299 00:08:24.010 Removing: /var/run/dpdk/spdk_pid995715 00:08:24.010 Removing: /var/run/dpdk/spdk_pid996122 00:08:24.010 Removing: /var/run/dpdk/spdk_pid996667 00:08:24.010 Removing: /var/run/dpdk/spdk_pid997195 00:08:24.010 Removing: /var/run/dpdk/spdk_pid997652 00:08:24.010 Removing: /var/run/dpdk/spdk_pid998040 00:08:24.010 Clean 00:08:24.010 12:28:29 -- common/autotest_common.sh@1453 -- # return 0 00:08:24.010 12:28:29 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:08:24.010 12:28:29 -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:24.010 12:28:29 -- common/autotest_common.sh@10 -- # set +x 00:08:24.010 12:28:29 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:08:24.010 12:28:29 -- 
common/autotest_common.sh@732 -- # xtrace_disable 00:08:24.010 12:28:29 -- common/autotest_common.sh@10 -- # set +x 00:08:24.010 12:28:29 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt 00:08:24.010 12:28:29 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log ]] 00:08:24.010 12:28:29 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log 00:08:24.010 12:28:29 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:08:24.010 12:28:29 -- spdk/autotest.sh@398 -- # hostname 00:08:24.010 12:28:29 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -c --no-external -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk -t spdk-wfp-20 -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_test.info 00:08:24.010 geninfo: WARNING: invalid characters removed from testname! 00:08:27.300 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/mdns_server.gcda 00:08:32.602 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_stubs.gcda 00:08:36.797 12:28:41 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -a /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:08:43.473 12:28:48 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:08:48.749 12:28:54 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:08:54.025 12:28:59 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:08:59.301 12:29:04 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:09:04.576 12:29:09 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:09:09.849 12:29:15 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:09:09.849 12:29:15 -- spdk/autorun.sh@1 -- $ timing_finish 00:09:09.849 12:29:15 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt ]] 00:09:09.849 12:29:15 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:09:09.849 12:29:15 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:09:09.849 12:29:15 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt 00:09:09.849 + [[ -n 855537 ]] 00:09:09.849 + sudo kill 855537 00:09:09.857 [Pipeline] } 00:09:09.871 [Pipeline] // stage 00:09:09.876 [Pipeline] } 00:09:09.890 [Pipeline] // timeout 00:09:09.894 [Pipeline] } 00:09:09.908 [Pipeline] // catchError 00:09:09.912 [Pipeline] } 00:09:09.925 [Pipeline] // wrap 00:09:09.930 [Pipeline] } 00:09:09.944 [Pipeline] // catchError 00:09:09.952 [Pipeline] stage 00:09:09.954 [Pipeline] { (Epilogue) 00:09:09.965 [Pipeline] catchError 00:09:09.967 [Pipeline] { 00:09:09.978 [Pipeline] echo 00:09:09.980 Cleanup processes 00:09:09.985 [Pipeline] sh 00:09:10.268 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:09:10.268 1006422 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:09:10.282 [Pipeline] sh 00:09:10.566 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:09:10.566 ++ grep -v 'sudo pgrep' 00:09:10.566 ++ awk '{print $1}' 00:09:10.566 + sudo kill -9 00:09:10.566 + true 00:09:10.578 [Pipeline] sh 00:09:10.864 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:09:10.864 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,718 MiB 00:09:10.864 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,718 MiB 00:09:12.239 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,718 MiB 00:09:24.450 [Pipeline] sh 00:09:24.733 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:09:24.733 Artifacts sizes are 
good 00:09:24.747 [Pipeline] archiveArtifacts 00:09:24.754 Archiving artifacts 00:09:24.911 [Pipeline] sh 00:09:25.238 + sudo chown -R sys_sgci: /var/jenkins/workspace/short-fuzz-phy-autotest 00:09:25.252 [Pipeline] cleanWs 00:09:25.261 [WS-CLEANUP] Deleting project workspace... 00:09:25.261 [WS-CLEANUP] Deferred wipeout is used... 00:09:25.268 [WS-CLEANUP] done 00:09:25.270 [Pipeline] } 00:09:25.287 [Pipeline] // catchError 00:09:25.299 [Pipeline] sh 00:09:25.577 + logger -p user.info -t JENKINS-CI 00:09:25.585 [Pipeline] } 00:09:25.599 [Pipeline] // stage 00:09:25.605 [Pipeline] } 00:09:25.619 [Pipeline] // node 00:09:25.624 [Pipeline] End of Pipeline 00:09:25.679 Finished: SUCCESS
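For reference, the per-target setup that repeats throughout this log (fuzzer types 2 through 6) can be replayed outside Jenkins. The sketch below is a minimal reconstruction distilled from the vfio/run.sh xtrace lines recorded above for fuzzer_type=2; SPDK_DIR and OUTPUT_DIR are placeholders (assumptions, not paths from this run), and the suppression entries echoed at run.sh@43-44 are assumed to land in /var/tmp/suppress_vfio_fuzz. Every flag on the final command mirrors the invocation captured in the log.

    #!/usr/bin/env bash
    # Sketch of one vfio fuzz round (fuzzer_type=2), assuming a local SPDK build.
    set -ex
    SPDK_DIR=/path/to/spdk                    # assumption: your SPDK checkout with llvm_vfio_fuzz built
    OUTPUT_DIR=/path/to/output                # assumption: where LLVM profile data should go (-P)
    FUZZ_DIR=/tmp/vfio-user-2
    CORPUS_DIR="$SPDK_DIR/../corpus/llvm_vfio_2"
    SUPPRESS_FILE=/var/tmp/suppress_vfio_fuzz

    # Create the per-run vfio-user socket directories and the corpus directory (run.sh@36)
    mkdir -p "$FUZZ_DIR" "$FUZZ_DIR/domain/1" "$FUZZ_DIR/domain/2" "$CORPUS_DIR"

    # Rewrite the template JSON config so this run gets its own vfio-user paths (run.sh@39)
    sed -e "s%/tmp/vfio-user/domain/1%$FUZZ_DIR/domain/1%; s%/tmp/vfio-user/domain/2%$FUZZ_DIR/domain/2%" \
        "$SPDK_DIR/test/fuzz/llvm/vfio/fuzz_vfio_json.conf" > "$FUZZ_DIR/fuzz_vfio_json.conf"

    # Suppress the known-benign leaks so LeakSanitizer does not fail the run (run.sh@43-44, run.sh@34)
    printf 'leak:spdk_nvmf_qpair_disconnect\nleak:nvmf_ctrlr_create\n' > "$SUPPRESS_FILE"
    export LSAN_OPTIONS=report_objects=1:suppressions=$SUPPRESS_FILE:print_suppressions=0

    # Run the harness for 1 second on core 0x1, fuzzer_type 2 (run.sh@47)
    "$SPDK_DIR/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz" -m 0x1 -s 0 \
        -P "$OUTPUT_DIR/llvm/" -F "$FUZZ_DIR/domain/1" -c "$FUZZ_DIR/fuzz_vfio_json.conf" \
        -t 1 -D "$CORPUS_DIR" -Y "$FUZZ_DIR/domain/2" -r "$FUZZ_DIR/spdk2.sock" -Z 2

    # Cleanup, as at run.sh@58
    rm -rf "$FUZZ_DIR" "$SUPPRESS_FILE"

The rounds for fuzzer types 3, 4, 5 and 6 in this log differ only in the /tmp/vfio-user-N directory, the llvm_vfio_N corpus suffix, the spdkN.sock name, and the -Z value.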