00:00:00.000 Started by upstream project "autotest-nightly" build number 4353 00:00:00.000 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3716 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.042 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.043 The recommended git tool is: git 00:00:00.043 using credential 00000000-0000-0000-0000-000000000002 00:00:00.045 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.064 Fetching changes from the remote Git repository 00:00:00.066 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.099 Using shallow fetch with depth 1 00:00:00.099 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.099 > git --version # timeout=10 00:00:00.154 > git --version # 'git version 2.39.2' 00:00:00.154 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.197 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.197 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.486 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.497 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.508 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:05.509 > git config core.sparsecheckout # timeout=10 00:00:05.521 > git read-tree -mu HEAD # timeout=10 00:00:05.536 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:05.554 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:05.554 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:05.674 [Pipeline] Start of Pipeline 00:00:05.685 [Pipeline] library 00:00:05.686 Loading library shm_lib@master 00:00:05.686 Library shm_lib@master is cached. Copying from home. 00:00:05.701 [Pipeline] node 00:00:05.717 Running on WFP20 in /var/jenkins/workspace/short-fuzz-phy-autotest 00:00:05.718 [Pipeline] { 00:00:05.725 [Pipeline] catchError 00:00:05.726 [Pipeline] { 00:00:05.736 [Pipeline] wrap 00:00:05.744 [Pipeline] { 00:00:05.751 [Pipeline] stage 00:00:05.753 [Pipeline] { (Prologue) 00:00:05.981 [Pipeline] sh 00:00:06.268 + logger -p user.info -t JENKINS-CI 00:00:06.287 [Pipeline] echo 00:00:06.289 Node: WFP20 00:00:06.296 [Pipeline] sh 00:00:06.595 [Pipeline] setCustomBuildProperty 00:00:06.606 [Pipeline] echo 00:00:06.607 Cleanup processes 00:00:06.612 [Pipeline] sh 00:00:06.896 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:06.896 1024959 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:06.912 [Pipeline] sh 00:00:07.197 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:07.197 ++ grep -v 'sudo pgrep' 00:00:07.197 ++ awk '{print $1}' 00:00:07.197 + sudo kill -9 00:00:07.197 + true 00:00:07.211 [Pipeline] cleanWs 00:00:07.221 [WS-CLEANUP] Deleting project workspace... 00:00:07.221 [WS-CLEANUP] Deferred wipeout is used... 
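The "Cleanup processes" stage above reduces to a single pgrep/kill pipeline: list anything still running out of the job's workspace, drop the pgrep invocation itself, and kill what is left. A minimal sketch of that pattern, assuming a hypothetical WORKSPACE path (the real job uses /var/jenkins/workspace/short-fuzz-phy-autotest):

#!/usr/bin/env bash
# Sketch of the stale-process cleanup shown above (hypothetical workspace path).
WORKSPACE=/var/jenkins/workspace/example-autotest

# Find leftovers from a previous build, drop the pgrep line itself, keep the PIDs.
pids=$(sudo pgrep -af "$WORKSPACE/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')

# Kill them; '|| true' keeps the build going when the list is empty, which is
# why the log shows "+ true" right after "+ sudo kill -9".
sudo kill -9 $pids || true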
00:00:07.227 [WS-CLEANUP] done 00:00:07.231 [Pipeline] setCustomBuildProperty 00:00:07.243 [Pipeline] sh 00:00:07.523 + sudo git config --global --replace-all safe.directory '*' 00:00:07.628 [Pipeline] httpRequest 00:00:08.044 [Pipeline] echo 00:00:08.045 Sorcerer 10.211.164.20 is alive 00:00:08.054 [Pipeline] retry 00:00:08.055 [Pipeline] { 00:00:08.065 [Pipeline] httpRequest 00:00:08.068 HttpMethod: GET 00:00:08.069 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.070 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.097 Response Code: HTTP/1.1 200 OK 00:00:08.097 Success: Status code 200 is in the accepted range: 200,404 00:00:08.098 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:32.461 [Pipeline] } 00:00:32.477 [Pipeline] // retry 00:00:32.485 [Pipeline] sh 00:00:32.771 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:32.787 [Pipeline] httpRequest 00:00:33.198 [Pipeline] echo 00:00:33.200 Sorcerer 10.211.164.20 is alive 00:00:33.210 [Pipeline] retry 00:00:33.212 [Pipeline] { 00:00:33.226 [Pipeline] httpRequest 00:00:33.231 HttpMethod: GET 00:00:33.231 URL: http://10.211.164.20/packages/spdk_d58eef2a29f5d65b15a72162d9d79db68f27aa81.tar.gz 00:00:33.232 Sending request to url: http://10.211.164.20/packages/spdk_d58eef2a29f5d65b15a72162d9d79db68f27aa81.tar.gz 00:00:33.247 Response Code: HTTP/1.1 200 OK 00:00:33.247 Success: Status code 200 is in the accepted range: 200,404 00:00:33.247 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk_d58eef2a29f5d65b15a72162d9d79db68f27aa81.tar.gz 00:01:25.843 [Pipeline] } 00:01:25.875 [Pipeline] // retry 00:01:25.880 [Pipeline] sh 00:01:26.158 + tar --no-same-owner -xf spdk_d58eef2a29f5d65b15a72162d9d79db68f27aa81.tar.gz 00:01:28.706 [Pipeline] sh 00:01:28.993 + git -C spdk log --oneline -n5 00:01:28.993 d58eef2a2 nvme/rdma: Fix reinserting qpair in connecting list after stale state 00:01:28.993 2104eacf0 test/check_so_deps: use VERSION to look for prior tags 00:01:28.993 66289a6db build: use VERSION file for storing version 00:01:28.993 626389917 nvme/rdma: Don't limit max_sge if UMR is used 00:01:28.993 cec5ba284 nvme/rdma: Register UMR per IO request 00:01:29.004 [Pipeline] } 00:01:29.018 [Pipeline] // stage 00:01:29.026 [Pipeline] stage 00:01:29.029 [Pipeline] { (Prepare) 00:01:29.044 [Pipeline] writeFile 00:01:29.059 [Pipeline] sh 00:01:29.415 + logger -p user.info -t JENKINS-CI 00:01:29.428 [Pipeline] sh 00:01:29.713 + logger -p user.info -t JENKINS-CI 00:01:29.725 [Pipeline] sh 00:01:30.009 + cat autorun-spdk.conf 00:01:30.009 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:30.009 SPDK_TEST_FUZZER_SHORT=1 00:01:30.009 SPDK_TEST_FUZZER=1 00:01:30.009 SPDK_TEST_SETUP=1 00:01:30.009 SPDK_RUN_UBSAN=1 00:01:30.017 RUN_NIGHTLY=1 00:01:30.021 [Pipeline] readFile 00:01:30.045 [Pipeline] withEnv 00:01:30.047 [Pipeline] { 00:01:30.060 [Pipeline] sh 00:01:30.342 + set -ex 00:01:30.342 + [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf ]] 00:01:30.342 + source /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:01:30.342 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:30.342 ++ SPDK_TEST_FUZZER_SHORT=1 00:01:30.342 ++ SPDK_TEST_FUZZER=1 00:01:30.342 ++ SPDK_TEST_SETUP=1 00:01:30.342 ++ SPDK_RUN_UBSAN=1 00:01:30.342 ++ RUN_NIGHTLY=1 00:01:30.342 + case $SPDK_TEST_NVMF_NICS in 00:01:30.342 + DRIVERS= 
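The prologue above ends by writing the job's autorun-spdk.conf and then sourcing it under set -ex, which is what produces the "++ SPDK_..." echo lines. A minimal sketch of that materialize-and-consume pattern, using the same flags the log just printed (paths shortened):

cat > autorun-spdk.conf <<'EOF'
SPDK_RUN_FUNCTIONAL_TEST=1
SPDK_TEST_FUZZER_SHORT=1
SPDK_TEST_FUZZER=1
SPDK_TEST_SETUP=1
SPDK_RUN_UBSAN=1
RUN_NIGHTLY=1
EOF

set -ex                      # echo every command, abort on the first failure
source ./autorun-spdk.conf   # the '++' lines in the log come from this source step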
00:01:30.342 + [[ -n '' ]] 00:01:30.342 + exit 0 00:01:30.351 [Pipeline] } 00:01:30.364 [Pipeline] // withEnv 00:01:30.369 [Pipeline] } 00:01:30.381 [Pipeline] // stage 00:01:30.390 [Pipeline] catchError 00:01:30.392 [Pipeline] { 00:01:30.406 [Pipeline] timeout 00:01:30.407 Timeout set to expire in 30 min 00:01:30.408 [Pipeline] { 00:01:30.424 [Pipeline] stage 00:01:30.426 [Pipeline] { (Tests) 00:01:30.440 [Pipeline] sh 00:01:30.729 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/short-fuzz-phy-autotest 00:01:30.730 ++ readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest 00:01:30.730 + DIR_ROOT=/var/jenkins/workspace/short-fuzz-phy-autotest 00:01:30.730 + [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest ]] 00:01:30.730 + DIR_SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:01:30.730 + DIR_OUTPUT=/var/jenkins/workspace/short-fuzz-phy-autotest/output 00:01:30.730 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk ]] 00:01:30.730 + [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]] 00:01:30.730 + mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/output 00:01:30.730 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]] 00:01:30.730 + [[ short-fuzz-phy-autotest == pkgdep-* ]] 00:01:30.730 + cd /var/jenkins/workspace/short-fuzz-phy-autotest 00:01:30.730 + source /etc/os-release 00:01:30.730 ++ NAME='Fedora Linux' 00:01:30.730 ++ VERSION='39 (Cloud Edition)' 00:01:30.730 ++ ID=fedora 00:01:30.730 ++ VERSION_ID=39 00:01:30.730 ++ VERSION_CODENAME= 00:01:30.730 ++ PLATFORM_ID=platform:f39 00:01:30.730 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:30.730 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:30.730 ++ LOGO=fedora-logo-icon 00:01:30.730 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:30.730 ++ HOME_URL=https://fedoraproject.org/ 00:01:30.730 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:30.730 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:30.730 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:30.730 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:30.730 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:30.730 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:30.730 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:30.730 ++ SUPPORT_END=2024-11-12 00:01:30.730 ++ VARIANT='Cloud Edition' 00:01:30.730 ++ VARIANT_ID=cloud 00:01:30.730 + uname -a 00:01:30.730 Linux spdk-wfp-20 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:30.730 + sudo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:01:33.269 Hugepages 00:01:33.269 node hugesize free / total 00:01:33.269 node0 1048576kB 0 / 0 00:01:33.269 node0 2048kB 0 / 0 00:01:33.269 node1 1048576kB 0 / 0 00:01:33.269 node1 2048kB 0 / 0 00:01:33.269 00:01:33.269 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:33.269 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:01:33.269 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:01:33.269 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:01:33.269 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:01:33.269 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:01:33.269 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:01:33.269 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:01:33.269 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:01:33.269 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:01:33.269 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:01:33.269 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:01:33.528 I/OAT 0000:80:04.3 
8086 2021 1 ioatdma - - 00:01:33.528 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:01:33.528 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:01:33.528 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:01:33.528 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:01:33.528 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:01:33.528 + rm -f /tmp/spdk-ld-path 00:01:33.528 + source autorun-spdk.conf 00:01:33.528 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:33.528 ++ SPDK_TEST_FUZZER_SHORT=1 00:01:33.528 ++ SPDK_TEST_FUZZER=1 00:01:33.528 ++ SPDK_TEST_SETUP=1 00:01:33.528 ++ SPDK_RUN_UBSAN=1 00:01:33.528 ++ RUN_NIGHTLY=1 00:01:33.528 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:33.528 + [[ -n '' ]] 00:01:33.528 + sudo git config --global --add safe.directory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:01:33.528 + for M in /var/spdk/build-*-manifest.txt 00:01:33.528 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:33.528 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/ 00:01:33.528 + for M in /var/spdk/build-*-manifest.txt 00:01:33.528 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:33.528 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/ 00:01:33.528 + for M in /var/spdk/build-*-manifest.txt 00:01:33.528 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:33.528 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/ 00:01:33.528 ++ uname 00:01:33.528 + [[ Linux == \L\i\n\u\x ]] 00:01:33.528 + sudo dmesg -T 00:01:33.528 + sudo dmesg --clear 00:01:33.528 + dmesg_pid=1026415 00:01:33.528 + [[ Fedora Linux == FreeBSD ]] 00:01:33.528 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:33.528 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:33.528 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:33.528 + [[ -x /usr/src/fio-static/fio ]] 00:01:33.528 + export FIO_BIN=/usr/src/fio-static/fio 00:01:33.528 + FIO_BIN=/usr/src/fio-static/fio 00:01:33.528 + sudo dmesg -Tw 00:01:33.528 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\s\h\o\r\t\-\f\u\z\z\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:33.528 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:33.528 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:33.529 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:33.529 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:33.529 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:33.529 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:33.529 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:33.529 + spdk/autorun.sh /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:01:33.789 06:39:41 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:01:33.789 06:39:41 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:01:33.789 06:39:41 -- short-fuzz-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:33.789 06:39:41 -- short-fuzz-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_FUZZER_SHORT=1 00:01:33.789 06:39:41 -- short-fuzz-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_FUZZER=1 00:01:33.789 06:39:41 -- short-fuzz-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_SETUP=1 00:01:33.789 06:39:41 -- short-fuzz-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_RUN_UBSAN=1 00:01:33.789 06:39:41 -- short-fuzz-phy-autotest/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=1 00:01:33.789 06:39:41 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:33.789 06:39:41 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:01:33.789 06:39:41 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:01:33.789 06:39:41 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:01:33.789 06:39:41 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:33.789 06:39:41 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:33.789 06:39:41 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:33.789 06:39:41 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:33.789 06:39:41 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:33.789 06:39:41 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:33.789 06:39:41 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:33.789 06:39:41 -- paths/export.sh@5 -- $ export PATH 00:01:33.789 06:39:41 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:33.789 06:39:41 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:01:33.789 06:39:41 -- common/autobuild_common.sh@493 -- $ date +%s 00:01:33.789 06:39:41 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733981981.XXXXXX 00:01:33.789 06:39:41 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733981981.RCuBsF 00:01:33.789 06:39:41 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:01:33.789 06:39:41 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:01:33.789 06:39:41 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/' 00:01:33.789 06:39:41 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:33.789 06:39:41 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:33.789 06:39:41 -- common/autobuild_common.sh@509 -- $ get_config_params 00:01:33.789 06:39:41 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:01:33.789 06:39:41 -- common/autotest_common.sh@10 -- $ set +x 00:01:33.789 06:39:41 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:01:33.789 06:39:41 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:01:33.789 06:39:41 -- pm/common@17 -- $ local monitor 00:01:33.789 06:39:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:33.789 06:39:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:33.789 06:39:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:33.789 06:39:41 -- pm/common@21 -- $ date +%s 00:01:33.789 06:39:41 -- pm/common@21 -- $ date +%s 00:01:33.789 06:39:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:33.789 06:39:41 -- pm/common@25 -- $ sleep 1 00:01:33.789 06:39:41 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733981981 
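Before the build starts, autobuild_common.sh spins up the power/utilization monitors seen above; each one is pointed at the shared power/ output directory and tagged with the same monitor.autobuild.sh.<epoch> prefix so their .pm.log files group together. A rough sketch of that launch pattern, under the assumption that the collectors are simply backgrounded (SPDK_DIR and OUTPUT are hypothetical stand-ins for this job's paths):

SPDK_DIR=/var/jenkins/workspace/example-autotest/spdk   # hypothetical
OUTPUT=$SPDK_DIR/../output                              # hypothetical
STAMP="monitor.autobuild.sh.$(date +%s)"

for mon in collect-vmstat collect-cpu-load collect-cpu-temp; do
    "$SPDK_DIR/scripts/perf/pm/$mon" -d "$OUTPUT/power" -l -p "$STAMP" &
done
# collect-bmc-pm talks to the BMC and needs root, hence the sudo -E in the log.
sudo -E "$SPDK_DIR/scripts/perf/pm/collect-bmc-pm" -d "$OUTPUT/power" -l -p "$STAMP" &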
00:01:33.789 06:39:41 -- pm/common@21 -- $ date +%s 00:01:33.789 06:39:41 -- pm/common@21 -- $ date +%s 00:01:33.789 06:39:41 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733981981 00:01:33.789 06:39:41 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733981981 00:01:33.789 06:39:41 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733981981 00:01:33.789 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733981981_collect-vmstat.pm.log 00:01:33.789 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733981981_collect-cpu-load.pm.log 00:01:33.789 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733981981_collect-cpu-temp.pm.log 00:01:33.789 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733981981_collect-bmc-pm.bmc.pm.log 00:01:34.728 06:39:42 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:01:34.728 06:39:42 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:34.728 06:39:42 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:34.728 06:39:42 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:01:34.987 06:39:42 -- spdk/autobuild.sh@16 -- $ date -u 00:01:34.987 Thu Dec 12 05:39:42 AM UTC 2024 00:01:34.987 06:39:42 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:34.987 v25.01-rc1-1-gd58eef2a2 00:01:34.987 06:39:42 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:34.987 06:39:42 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:34.987 06:39:42 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:34.987 06:39:42 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:34.987 06:39:42 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:34.987 06:39:42 -- common/autotest_common.sh@10 -- $ set +x 00:01:34.987 ************************************ 00:01:34.987 START TEST ubsan 00:01:34.987 ************************************ 00:01:34.987 06:39:42 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:01:34.987 using ubsan 00:01:34.987 00:01:34.987 real 0m0.001s 00:01:34.987 user 0m0.000s 00:01:34.987 sys 0m0.000s 00:01:34.987 06:39:42 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:34.987 06:39:42 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:34.987 ************************************ 00:01:34.987 END TEST ubsan 00:01:34.987 ************************************ 00:01:34.987 06:39:42 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:34.987 06:39:42 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:34.987 06:39:42 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:34.987 06:39:42 -- spdk/autobuild.sh@51 -- $ [[ 1 -eq 1 ]] 00:01:34.987 06:39:42 -- spdk/autobuild.sh@52 -- $ llvm_precompile 00:01:34.987 06:39:42 -- common/autobuild_common.sh@445 -- $ run_test autobuild_llvm_precompile _llvm_precompile 00:01:34.987 06:39:42 -- common/autotest_common.sh@1105 -- $ '[' 2 
-le 1 ']' 00:01:34.987 06:39:42 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:34.987 06:39:42 -- common/autotest_common.sh@10 -- $ set +x 00:01:34.987 ************************************ 00:01:34.987 START TEST autobuild_llvm_precompile 00:01:34.987 ************************************ 00:01:34.987 06:39:42 autobuild_llvm_precompile -- common/autotest_common.sh@1129 -- $ _llvm_precompile 00:01:34.987 06:39:42 autobuild_llvm_precompile -- common/autobuild_common.sh@32 -- $ clang --version 00:01:34.987 06:39:42 autobuild_llvm_precompile -- common/autobuild_common.sh@32 -- $ [[ clang version 17.0.6 (Fedora 17.0.6-2.fc39) 00:01:34.987 Target: x86_64-redhat-linux-gnu 00:01:34.987 Thread model: posix 00:01:34.987 InstalledDir: /usr/bin =~ version (([0-9]+).([0-9]+).([0-9]+)) ]] 00:01:34.987 06:39:42 autobuild_llvm_precompile -- common/autobuild_common.sh@33 -- $ clang_num=17 00:01:34.987 06:39:42 autobuild_llvm_precompile -- common/autobuild_common.sh@35 -- $ export CC=clang-17 00:01:34.987 06:39:42 autobuild_llvm_precompile -- common/autobuild_common.sh@35 -- $ CC=clang-17 00:01:34.987 06:39:42 autobuild_llvm_precompile -- common/autobuild_common.sh@36 -- $ export CXX=clang++-17 00:01:34.987 06:39:42 autobuild_llvm_precompile -- common/autobuild_common.sh@36 -- $ CXX=clang++-17 00:01:34.987 06:39:42 autobuild_llvm_precompile -- common/autobuild_common.sh@38 -- $ fuzzer_libs=(/usr/lib*/clang/@("$clang_num"|"$clang_version")/lib/*linux*/libclang_rt.fuzzer_no_main?(-x86_64).a) 00:01:34.987 06:39:42 autobuild_llvm_precompile -- common/autobuild_common.sh@39 -- $ fuzzer_lib=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:01:34.987 06:39:42 autobuild_llvm_precompile -- common/autobuild_common.sh@40 -- $ [[ -e /usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a ]] 00:01:34.987 06:39:42 autobuild_llvm_precompile -- common/autobuild_common.sh@42 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a' 00:01:34.987 06:39:42 autobuild_llvm_precompile -- common/autobuild_common.sh@44 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:01:35.246 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:01:35.246 Using default DPDK in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:01:35.505 Using 'verbs' RDMA provider 00:01:49.098 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal.log)...done. 00:02:03.985 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:03.985 Creating mk/config.mk...done. 00:02:03.985 Creating mk/cc.flags.mk...done. 00:02:03.985 Type 'make' to build. 
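The autobuild_llvm_precompile test above works by reading the clang major version out of "clang --version", exporting CC/CXX to that versioned compiler, and locating the libFuzzer runtime (libclang_rt.fuzzer_no_main.a) to pass as --with-fuzzer to configure. A simplified sketch of that detection, assuming a plain ls glob in place of the extglob array the real helper uses:

# Pull the clang major version out of `clang --version` (17 on this builder).
if [[ "$(clang --version)" =~ version\ ([0-9]+)\. ]]; then
    clang_num=${BASH_REMATCH[1]}
fi
export CC=clang-$clang_num CXX=clang++-$clang_num

# libFuzzer runtime without its own main(), linked into the SPDK fuzz targets.
fuzzer_lib=$(ls /usr/lib*/clang/"$clang_num"/lib/*linux*/libclang_rt.fuzzer_no_main*.a 2>/dev/null | head -n1)

config_params='--enable-debug --enable-werror'   # abbreviated from the log
[[ -e "$fuzzer_lib" ]] && config_params+=" --with-fuzzer=$fuzzer_lib"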
00:02:03.985 00:02:03.985 real 0m28.392s 00:02:03.985 user 0m12.518s 00:02:03.985 sys 0m15.021s 00:02:03.985 06:40:10 autobuild_llvm_precompile -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:03.985 06:40:10 autobuild_llvm_precompile -- common/autotest_common.sh@10 -- $ set +x 00:02:03.985 ************************************ 00:02:03.985 END TEST autobuild_llvm_precompile 00:02:03.985 ************************************ 00:02:03.985 06:40:10 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:03.985 06:40:10 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:03.985 06:40:10 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:03.985 06:40:10 -- spdk/autobuild.sh@62 -- $ [[ 1 -eq 1 ]] 00:02:03.985 06:40:10 -- spdk/autobuild.sh@64 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:02:03.985 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:02:03.985 Using default DPDK in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:02:03.985 Using 'verbs' RDMA provider 00:02:17.136 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal.log)...done. 00:02:29.449 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:29.449 Creating mk/config.mk...done. 00:02:29.449 Creating mk/cc.flags.mk...done. 00:02:29.449 Type 'make' to build. 00:02:29.449 06:40:35 -- spdk/autobuild.sh@70 -- $ run_test make make -j112 00:02:29.449 06:40:35 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:29.449 06:40:35 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:29.449 06:40:35 -- common/autotest_common.sh@10 -- $ set +x 00:02:29.449 ************************************ 00:02:29.449 START TEST make 00:02:29.449 ************************************ 00:02:29.449 06:40:35 make -- common/autotest_common.sh@1129 -- $ make -j112 00:02:30.830 The Meson build system 00:02:30.830 Version: 1.5.0 00:02:30.830 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user 00:02:30.830 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:30.830 Build type: native build 00:02:30.830 Project name: libvfio-user 00:02:30.830 Project version: 0.0.1 00:02:30.830 C compiler for the host machine: clang-17 (clang 17.0.6 "clang version 17.0.6 (Fedora 17.0.6-2.fc39)") 00:02:30.830 C linker for the host machine: clang-17 ld.bfd 2.40-14 00:02:30.830 Host machine cpu family: x86_64 00:02:30.830 Host machine cpu: x86_64 00:02:30.830 Run-time dependency threads found: YES 00:02:30.830 Library dl found: YES 00:02:30.830 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:30.830 Run-time dependency json-c found: YES 0.17 00:02:30.830 Run-time dependency cmocka found: YES 1.1.7 00:02:30.830 Program pytest-3 found: NO 00:02:30.830 Program flake8 found: NO 00:02:30.830 Program misspell-fixer found: NO 00:02:30.830 Program restructuredtext-lint found: NO 00:02:30.830 Program valgrind found: YES (/usr/bin/valgrind) 00:02:30.830 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:30.830 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:30.830 Compiler for C supports arguments 
-Wwrite-strings: YES 00:02:30.830 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:30.830 Program test-lspci.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:02:30.830 Program test-linkage.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:02:30.830 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:30.830 Build targets in project: 8 00:02:30.831 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:30.831 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:30.831 00:02:30.831 libvfio-user 0.0.1 00:02:30.831 00:02:30.831 User defined options 00:02:30.831 buildtype : debug 00:02:30.831 default_library: static 00:02:30.831 libdir : /usr/local/lib 00:02:30.831 00:02:30.831 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:30.831 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:31.089 [1/36] Compiling C object lib/libvfio-user.a.p/migration.c.o 00:02:31.089 [2/36] Compiling C object lib/libvfio-user.a.p/pci.c.o 00:02:31.089 [3/36] Compiling C object lib/libvfio-user.a.p/irq.c.o 00:02:31.089 [4/36] Compiling C object lib/libvfio-user.a.p/tran.c.o 00:02:31.089 [5/36] Compiling C object samples/lspci.p/lspci.c.o 00:02:31.089 [6/36] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:31.089 [7/36] Compiling C object samples/null.p/null.c.o 00:02:31.089 [8/36] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:31.089 [9/36] Compiling C object lib/libvfio-user.a.p/tran_sock.c.o 00:02:31.089 [10/36] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:31.089 [11/36] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:31.089 [12/36] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:31.089 [13/36] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:31.089 [14/36] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:31.089 [15/36] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:31.089 [16/36] Compiling C object lib/libvfio-user.a.p/dma.c.o 00:02:31.089 [17/36] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:31.089 [18/36] Compiling C object test/unit_tests.p/mocks.c.o 00:02:31.089 [19/36] Compiling C object lib/libvfio-user.a.p/pci_caps.c.o 00:02:31.089 [20/36] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:31.089 [21/36] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:31.089 [22/36] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:31.089 [23/36] Compiling C object samples/server.p/server.c.o 00:02:31.089 [24/36] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:31.089 [25/36] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:31.089 [26/36] Compiling C object samples/client.p/client.c.o 00:02:31.089 [27/36] Compiling C object lib/libvfio-user.a.p/libvfio-user.c.o 00:02:31.089 [28/36] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:31.089 [29/36] Linking static target lib/libvfio-user.a 00:02:31.089 [30/36] Linking target samples/client 00:02:31.089 [31/36] Linking target test/unit_tests 00:02:31.089 [32/36] Linking 
target samples/gpio-pci-idio-16 00:02:31.089 [33/36] Linking target samples/lspci 00:02:31.089 [34/36] Linking target samples/server 00:02:31.089 [35/36] Linking target samples/null 00:02:31.089 [36/36] Linking target samples/shadow_ioeventfd_server 00:02:31.089 INFO: autodetecting backend as ninja 00:02:31.089 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:31.347 DESTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:31.605 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:31.605 ninja: no work to do. 00:02:36.870 The Meson build system 00:02:36.870 Version: 1.5.0 00:02:36.870 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk 00:02:36.870 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp 00:02:36.870 Build type: native build 00:02:36.870 Program cat found: YES (/usr/bin/cat) 00:02:36.870 Project name: DPDK 00:02:36.870 Project version: 24.03.0 00:02:36.870 C compiler for the host machine: clang-17 (clang 17.0.6 "clang version 17.0.6 (Fedora 17.0.6-2.fc39)") 00:02:36.870 C linker for the host machine: clang-17 ld.bfd 2.40-14 00:02:36.870 Host machine cpu family: x86_64 00:02:36.870 Host machine cpu: x86_64 00:02:36.870 Message: ## Building in Developer Mode ## 00:02:36.870 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:36.870 Program check-symbols.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:02:36.870 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:36.870 Program python3 found: YES (/usr/bin/python3) 00:02:36.870 Program cat found: YES (/usr/bin/cat) 00:02:36.870 Compiler for C supports arguments -march=native: YES 00:02:36.870 Checking for size of "void *" : 8 00:02:36.870 Checking for size of "void *" : 8 (cached) 00:02:36.870 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:36.870 Library m found: YES 00:02:36.870 Library numa found: YES 00:02:36.870 Has header "numaif.h" : YES 00:02:36.870 Library fdt found: NO 00:02:36.870 Library execinfo found: NO 00:02:36.870 Has header "execinfo.h" : YES 00:02:36.870 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:36.870 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:36.870 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:36.870 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:36.870 Run-time dependency openssl found: YES 3.1.1 00:02:36.870 Run-time dependency libpcap found: YES 1.10.4 00:02:36.870 Has header "pcap.h" with dependency libpcap: YES 00:02:36.870 Compiler for C supports arguments -Wcast-qual: YES 00:02:36.870 Compiler for C supports arguments -Wdeprecated: YES 00:02:36.870 Compiler for C supports arguments -Wformat: YES 00:02:36.870 Compiler for C supports arguments -Wformat-nonliteral: YES 00:02:36.870 Compiler for C supports arguments -Wformat-security: YES 00:02:36.870 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:36.870 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:36.870 Compiler for C supports arguments -Wnested-externs: YES 00:02:36.870 Compiler for C supports arguments -Wold-style-definition: 
YES 00:02:36.870 Compiler for C supports arguments -Wpointer-arith: YES 00:02:36.870 Compiler for C supports arguments -Wsign-compare: YES 00:02:36.870 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:36.870 Compiler for C supports arguments -Wundef: YES 00:02:36.870 Compiler for C supports arguments -Wwrite-strings: YES 00:02:36.870 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:36.870 Compiler for C supports arguments -Wno-packed-not-aligned: NO 00:02:36.870 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:36.870 Program objdump found: YES (/usr/bin/objdump) 00:02:36.870 Compiler for C supports arguments -mavx512f: YES 00:02:36.870 Checking if "AVX512 checking" compiles: YES 00:02:36.870 Fetching value of define "__SSE4_2__" : 1 00:02:36.870 Fetching value of define "__AES__" : 1 00:02:36.870 Fetching value of define "__AVX__" : 1 00:02:36.870 Fetching value of define "__AVX2__" : 1 00:02:36.870 Fetching value of define "__AVX512BW__" : 1 00:02:36.870 Fetching value of define "__AVX512CD__" : 1 00:02:36.870 Fetching value of define "__AVX512DQ__" : 1 00:02:36.870 Fetching value of define "__AVX512F__" : 1 00:02:36.870 Fetching value of define "__AVX512VL__" : 1 00:02:36.870 Fetching value of define "__PCLMUL__" : 1 00:02:36.870 Fetching value of define "__RDRND__" : 1 00:02:36.870 Fetching value of define "__RDSEED__" : 1 00:02:36.870 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:36.870 Fetching value of define "__znver1__" : (undefined) 00:02:36.870 Fetching value of define "__znver2__" : (undefined) 00:02:36.870 Fetching value of define "__znver3__" : (undefined) 00:02:36.870 Fetching value of define "__znver4__" : (undefined) 00:02:36.870 Compiler for C supports arguments -Wno-format-truncation: NO 00:02:36.870 Message: lib/log: Defining dependency "log" 00:02:36.870 Message: lib/kvargs: Defining dependency "kvargs" 00:02:36.870 Message: lib/telemetry: Defining dependency "telemetry" 00:02:36.870 Checking for function "getentropy" : NO 00:02:36.870 Message: lib/eal: Defining dependency "eal" 00:02:36.870 Message: lib/ring: Defining dependency "ring" 00:02:36.870 Message: lib/rcu: Defining dependency "rcu" 00:02:36.870 Message: lib/mempool: Defining dependency "mempool" 00:02:36.870 Message: lib/mbuf: Defining dependency "mbuf" 00:02:36.870 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:36.870 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:36.870 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:36.870 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:36.870 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:36.870 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:36.870 Compiler for C supports arguments -mpclmul: YES 00:02:36.870 Compiler for C supports arguments -maes: YES 00:02:36.870 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:36.870 Compiler for C supports arguments -mavx512bw: YES 00:02:36.870 Compiler for C supports arguments -mavx512dq: YES 00:02:36.870 Compiler for C supports arguments -mavx512vl: YES 00:02:36.870 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:36.870 Compiler for C supports arguments -mavx2: YES 00:02:36.870 Compiler for C supports arguments -mavx: YES 00:02:36.870 Message: lib/net: Defining dependency "net" 00:02:36.870 Message: lib/meter: Defining dependency "meter" 00:02:36.870 Message: lib/ethdev: Defining dependency "ethdev" 00:02:36.870 Message: lib/pci: Defining 
dependency "pci" 00:02:36.870 Message: lib/cmdline: Defining dependency "cmdline" 00:02:36.870 Message: lib/hash: Defining dependency "hash" 00:02:36.870 Message: lib/timer: Defining dependency "timer" 00:02:36.870 Message: lib/compressdev: Defining dependency "compressdev" 00:02:36.870 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:36.870 Message: lib/dmadev: Defining dependency "dmadev" 00:02:36.871 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:36.871 Message: lib/power: Defining dependency "power" 00:02:36.871 Message: lib/reorder: Defining dependency "reorder" 00:02:36.871 Message: lib/security: Defining dependency "security" 00:02:36.871 Has header "linux/userfaultfd.h" : YES 00:02:36.871 Has header "linux/vduse.h" : YES 00:02:36.871 Message: lib/vhost: Defining dependency "vhost" 00:02:36.871 Compiler for C supports arguments -Wno-format-truncation: NO (cached) 00:02:36.871 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:36.871 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:36.871 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:36.871 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:36.871 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:36.871 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:36.871 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:36.871 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:36.871 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:36.871 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:36.871 Configuring doxy-api-html.conf using configuration 00:02:36.871 Configuring doxy-api-man.conf using configuration 00:02:36.871 Program mandb found: YES (/usr/bin/mandb) 00:02:36.871 Program sphinx-build found: NO 00:02:36.871 Configuring rte_build_config.h using configuration 00:02:36.871 Message: 00:02:36.871 ================= 00:02:36.871 Applications Enabled 00:02:36.871 ================= 00:02:36.871 00:02:36.871 apps: 00:02:36.871 00:02:36.871 00:02:36.871 Message: 00:02:36.871 ================= 00:02:36.871 Libraries Enabled 00:02:36.871 ================= 00:02:36.871 00:02:36.871 libs: 00:02:36.871 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:36.871 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:36.871 cryptodev, dmadev, power, reorder, security, vhost, 00:02:36.871 00:02:36.871 Message: 00:02:36.871 =============== 00:02:36.871 Drivers Enabled 00:02:36.871 =============== 00:02:36.871 00:02:36.871 common: 00:02:36.871 00:02:36.871 bus: 00:02:36.871 pci, vdev, 00:02:36.871 mempool: 00:02:36.871 ring, 00:02:36.871 dma: 00:02:36.871 00:02:36.871 net: 00:02:36.871 00:02:36.871 crypto: 00:02:36.871 00:02:36.871 compress: 00:02:36.871 00:02:36.871 vdpa: 00:02:36.871 00:02:36.871 00:02:36.871 Message: 00:02:36.871 ================= 00:02:36.871 Content Skipped 00:02:36.871 ================= 00:02:36.871 00:02:36.871 apps: 00:02:36.871 dumpcap: explicitly disabled via build config 00:02:36.871 graph: explicitly disabled via build config 00:02:36.871 pdump: explicitly disabled via build config 00:02:36.871 proc-info: explicitly disabled via build config 00:02:36.871 test-acl: explicitly disabled via build config 00:02:36.871 test-bbdev: explicitly disabled via build config 00:02:36.871 test-cmdline: explicitly disabled via build config 00:02:36.871 test-compress-perf: 
explicitly disabled via build config 00:02:36.871 test-crypto-perf: explicitly disabled via build config 00:02:36.871 test-dma-perf: explicitly disabled via build config 00:02:36.871 test-eventdev: explicitly disabled via build config 00:02:36.871 test-fib: explicitly disabled via build config 00:02:36.871 test-flow-perf: explicitly disabled via build config 00:02:36.871 test-gpudev: explicitly disabled via build config 00:02:36.871 test-mldev: explicitly disabled via build config 00:02:36.871 test-pipeline: explicitly disabled via build config 00:02:36.871 test-pmd: explicitly disabled via build config 00:02:36.871 test-regex: explicitly disabled via build config 00:02:36.871 test-sad: explicitly disabled via build config 00:02:36.871 test-security-perf: explicitly disabled via build config 00:02:36.871 00:02:36.871 libs: 00:02:36.871 argparse: explicitly disabled via build config 00:02:36.871 metrics: explicitly disabled via build config 00:02:36.871 acl: explicitly disabled via build config 00:02:36.871 bbdev: explicitly disabled via build config 00:02:36.871 bitratestats: explicitly disabled via build config 00:02:36.871 bpf: explicitly disabled via build config 00:02:36.871 cfgfile: explicitly disabled via build config 00:02:36.871 distributor: explicitly disabled via build config 00:02:36.871 efd: explicitly disabled via build config 00:02:36.871 eventdev: explicitly disabled via build config 00:02:36.871 dispatcher: explicitly disabled via build config 00:02:36.871 gpudev: explicitly disabled via build config 00:02:36.871 gro: explicitly disabled via build config 00:02:36.871 gso: explicitly disabled via build config 00:02:36.871 ip_frag: explicitly disabled via build config 00:02:36.871 jobstats: explicitly disabled via build config 00:02:36.871 latencystats: explicitly disabled via build config 00:02:36.871 lpm: explicitly disabled via build config 00:02:36.871 member: explicitly disabled via build config 00:02:36.871 pcapng: explicitly disabled via build config 00:02:36.871 rawdev: explicitly disabled via build config 00:02:36.871 regexdev: explicitly disabled via build config 00:02:36.871 mldev: explicitly disabled via build config 00:02:36.871 rib: explicitly disabled via build config 00:02:36.871 sched: explicitly disabled via build config 00:02:36.871 stack: explicitly disabled via build config 00:02:36.871 ipsec: explicitly disabled via build config 00:02:36.871 pdcp: explicitly disabled via build config 00:02:36.871 fib: explicitly disabled via build config 00:02:36.871 port: explicitly disabled via build config 00:02:36.871 pdump: explicitly disabled via build config 00:02:36.871 table: explicitly disabled via build config 00:02:36.871 pipeline: explicitly disabled via build config 00:02:36.871 graph: explicitly disabled via build config 00:02:36.871 node: explicitly disabled via build config 00:02:36.871 00:02:36.871 drivers: 00:02:36.871 common/cpt: not in enabled drivers build config 00:02:36.871 common/dpaax: not in enabled drivers build config 00:02:36.871 common/iavf: not in enabled drivers build config 00:02:36.871 common/idpf: not in enabled drivers build config 00:02:36.871 common/ionic: not in enabled drivers build config 00:02:36.871 common/mvep: not in enabled drivers build config 00:02:36.871 common/octeontx: not in enabled drivers build config 00:02:36.871 bus/auxiliary: not in enabled drivers build config 00:02:36.871 bus/cdx: not in enabled drivers build config 00:02:36.871 bus/dpaa: not in enabled drivers build config 00:02:36.871 bus/fslmc: not in enabled 
drivers build config 00:02:36.871 bus/ifpga: not in enabled drivers build config 00:02:36.871 bus/platform: not in enabled drivers build config 00:02:36.871 bus/uacce: not in enabled drivers build config 00:02:36.871 bus/vmbus: not in enabled drivers build config 00:02:36.871 common/cnxk: not in enabled drivers build config 00:02:36.871 common/mlx5: not in enabled drivers build config 00:02:36.871 common/nfp: not in enabled drivers build config 00:02:36.871 common/nitrox: not in enabled drivers build config 00:02:36.871 common/qat: not in enabled drivers build config 00:02:36.871 common/sfc_efx: not in enabled drivers build config 00:02:36.871 mempool/bucket: not in enabled drivers build config 00:02:36.871 mempool/cnxk: not in enabled drivers build config 00:02:36.871 mempool/dpaa: not in enabled drivers build config 00:02:36.871 mempool/dpaa2: not in enabled drivers build config 00:02:36.871 mempool/octeontx: not in enabled drivers build config 00:02:36.871 mempool/stack: not in enabled drivers build config 00:02:36.871 dma/cnxk: not in enabled drivers build config 00:02:36.871 dma/dpaa: not in enabled drivers build config 00:02:36.871 dma/dpaa2: not in enabled drivers build config 00:02:36.871 dma/hisilicon: not in enabled drivers build config 00:02:36.871 dma/idxd: not in enabled drivers build config 00:02:36.871 dma/ioat: not in enabled drivers build config 00:02:36.871 dma/skeleton: not in enabled drivers build config 00:02:36.871 net/af_packet: not in enabled drivers build config 00:02:36.871 net/af_xdp: not in enabled drivers build config 00:02:36.871 net/ark: not in enabled drivers build config 00:02:36.871 net/atlantic: not in enabled drivers build config 00:02:36.871 net/avp: not in enabled drivers build config 00:02:36.871 net/axgbe: not in enabled drivers build config 00:02:36.871 net/bnx2x: not in enabled drivers build config 00:02:36.871 net/bnxt: not in enabled drivers build config 00:02:36.871 net/bonding: not in enabled drivers build config 00:02:36.871 net/cnxk: not in enabled drivers build config 00:02:36.871 net/cpfl: not in enabled drivers build config 00:02:36.871 net/cxgbe: not in enabled drivers build config 00:02:36.871 net/dpaa: not in enabled drivers build config 00:02:36.871 net/dpaa2: not in enabled drivers build config 00:02:36.871 net/e1000: not in enabled drivers build config 00:02:36.871 net/ena: not in enabled drivers build config 00:02:36.871 net/enetc: not in enabled drivers build config 00:02:36.871 net/enetfec: not in enabled drivers build config 00:02:36.871 net/enic: not in enabled drivers build config 00:02:36.871 net/failsafe: not in enabled drivers build config 00:02:36.871 net/fm10k: not in enabled drivers build config 00:02:36.871 net/gve: not in enabled drivers build config 00:02:36.871 net/hinic: not in enabled drivers build config 00:02:36.871 net/hns3: not in enabled drivers build config 00:02:36.871 net/i40e: not in enabled drivers build config 00:02:36.871 net/iavf: not in enabled drivers build config 00:02:36.871 net/ice: not in enabled drivers build config 00:02:36.871 net/idpf: not in enabled drivers build config 00:02:36.871 net/igc: not in enabled drivers build config 00:02:36.871 net/ionic: not in enabled drivers build config 00:02:36.871 net/ipn3ke: not in enabled drivers build config 00:02:36.871 net/ixgbe: not in enabled drivers build config 00:02:36.871 net/mana: not in enabled drivers build config 00:02:36.871 net/memif: not in enabled drivers build config 00:02:36.871 net/mlx4: not in enabled drivers build config 00:02:36.871 
net/mlx5: not in enabled drivers build config 00:02:36.871 net/mvneta: not in enabled drivers build config 00:02:36.871 net/mvpp2: not in enabled drivers build config 00:02:36.871 net/netvsc: not in enabled drivers build config 00:02:36.871 net/nfb: not in enabled drivers build config 00:02:36.871 net/nfp: not in enabled drivers build config 00:02:36.871 net/ngbe: not in enabled drivers build config 00:02:36.871 net/null: not in enabled drivers build config 00:02:36.871 net/octeontx: not in enabled drivers build config 00:02:36.871 net/octeon_ep: not in enabled drivers build config 00:02:36.871 net/pcap: not in enabled drivers build config 00:02:36.871 net/pfe: not in enabled drivers build config 00:02:36.871 net/qede: not in enabled drivers build config 00:02:36.871 net/ring: not in enabled drivers build config 00:02:36.871 net/sfc: not in enabled drivers build config 00:02:36.872 net/softnic: not in enabled drivers build config 00:02:36.872 net/tap: not in enabled drivers build config 00:02:36.872 net/thunderx: not in enabled drivers build config 00:02:36.872 net/txgbe: not in enabled drivers build config 00:02:36.872 net/vdev_netvsc: not in enabled drivers build config 00:02:36.872 net/vhost: not in enabled drivers build config 00:02:36.872 net/virtio: not in enabled drivers build config 00:02:36.872 net/vmxnet3: not in enabled drivers build config 00:02:36.872 raw/*: missing internal dependency, "rawdev" 00:02:36.872 crypto/armv8: not in enabled drivers build config 00:02:36.872 crypto/bcmfs: not in enabled drivers build config 00:02:36.872 crypto/caam_jr: not in enabled drivers build config 00:02:36.872 crypto/ccp: not in enabled drivers build config 00:02:36.872 crypto/cnxk: not in enabled drivers build config 00:02:36.872 crypto/dpaa_sec: not in enabled drivers build config 00:02:36.872 crypto/dpaa2_sec: not in enabled drivers build config 00:02:36.872 crypto/ipsec_mb: not in enabled drivers build config 00:02:36.872 crypto/mlx5: not in enabled drivers build config 00:02:36.872 crypto/mvsam: not in enabled drivers build config 00:02:36.872 crypto/nitrox: not in enabled drivers build config 00:02:36.872 crypto/null: not in enabled drivers build config 00:02:36.872 crypto/octeontx: not in enabled drivers build config 00:02:36.872 crypto/openssl: not in enabled drivers build config 00:02:36.872 crypto/scheduler: not in enabled drivers build config 00:02:36.872 crypto/uadk: not in enabled drivers build config 00:02:36.872 crypto/virtio: not in enabled drivers build config 00:02:36.872 compress/isal: not in enabled drivers build config 00:02:36.872 compress/mlx5: not in enabled drivers build config 00:02:36.872 compress/nitrox: not in enabled drivers build config 00:02:36.872 compress/octeontx: not in enabled drivers build config 00:02:36.872 compress/zlib: not in enabled drivers build config 00:02:36.872 regex/*: missing internal dependency, "regexdev" 00:02:36.872 ml/*: missing internal dependency, "mldev" 00:02:36.872 vdpa/ifc: not in enabled drivers build config 00:02:36.872 vdpa/mlx5: not in enabled drivers build config 00:02:36.872 vdpa/nfp: not in enabled drivers build config 00:02:36.872 vdpa/sfc: not in enabled drivers build config 00:02:36.872 event/*: missing internal dependency, "eventdev" 00:02:36.872 baseband/*: missing internal dependency, "bbdev" 00:02:36.872 gpu/*: missing internal dependency, "gpudev" 00:02:36.872 00:02:36.872 00:02:37.439 Build targets in project: 85 00:02:37.439 00:02:37.439 DPDK 24.03.0 00:02:37.439 00:02:37.439 User defined options 00:02:37.439 
buildtype : debug 00:02:37.439 default_library : static 00:02:37.439 libdir : lib 00:02:37.439 prefix : /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:02:37.439 c_args : -fPIC -Werror 00:02:37.439 c_link_args : 00:02:37.439 cpu_instruction_set: native 00:02:37.439 disable_apps : test-dma-perf,test,test-sad,test-acl,test-pmd,test-mldev,test-compress-perf,test-cmdline,test-regex,test-fib,graph,test-bbdev,dumpcap,test-gpudev,proc-info,test-pipeline,test-flow-perf,test-crypto-perf,pdump,test-eventdev,test-security-perf 00:02:37.439 disable_libs : port,lpm,ipsec,regexdev,dispatcher,argparse,bitratestats,rawdev,stack,graph,acl,bbdev,pipeline,member,sched,pcapng,mldev,eventdev,efd,metrics,latencystats,cfgfile,ip_frag,jobstats,pdump,pdcp,rib,node,fib,distributor,gso,table,bpf,gpudev,gro 00:02:37.439 enable_docs : false 00:02:37.439 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:02:37.439 enable_kmods : false 00:02:37.439 max_lcores : 128 00:02:37.439 tests : false 00:02:37.439 00:02:37.439 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:37.709 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp' 00:02:37.709 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:37.709 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:37.709 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:37.709 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:37.709 [5/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:37.709 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:37.709 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:37.709 [8/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:37.709 [9/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:37.709 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:37.709 [11/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:37.709 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:37.709 [13/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:37.709 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:37.709 [15/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:37.709 [16/268] Linking static target lib/librte_log.a 00:02:37.970 [17/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:37.970 [18/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:37.970 [19/268] Linking static target lib/librte_kvargs.a 00:02:37.970 [20/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:37.970 [21/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:37.970 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:37.970 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:37.970 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:37.970 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:37.970 [26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:37.970 
[27/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:37.970 [28/268] Linking static target lib/librte_pci.a 00:02:37.970 [29/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:37.970 [30/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:37.970 [31/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:37.970 [32/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:37.970 [33/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:37.970 [34/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:37.970 [35/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:38.228 [36/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.228 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:38.228 [38/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:38.228 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:38.228 [40/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:38.228 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:38.228 [42/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.229 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:38.229 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:38.229 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:38.229 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:38.229 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:38.229 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:38.229 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:38.229 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:38.229 [51/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:38.229 [52/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:38.229 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:38.229 [54/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:38.229 [55/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:38.229 [56/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:38.229 [57/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:38.229 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:38.229 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:38.229 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:38.229 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:38.229 [62/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:38.229 [63/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:38.229 [64/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:38.229 [65/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:38.229 [66/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 
00:02:38.229 [67/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:38.229 [68/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:38.229 [69/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:38.229 [70/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:38.229 [71/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:38.229 [72/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:38.229 [73/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:38.488 [74/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:38.488 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:38.488 [76/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:38.488 [77/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:38.488 [78/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:38.488 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:38.488 [80/268] Linking static target lib/librte_telemetry.a 00:02:38.488 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:38.488 [82/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:38.488 [83/268] Linking static target lib/librte_meter.a 00:02:38.488 [84/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:38.488 [85/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:38.488 [86/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:38.488 [87/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:38.488 [88/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:38.488 [89/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:38.488 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:38.488 [91/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:38.488 [92/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:38.488 [93/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:38.488 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:38.488 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:38.488 [96/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:38.488 [97/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:38.488 [98/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:38.488 [99/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:38.488 [100/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:38.488 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:38.488 [102/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:38.488 [103/268] Linking static target lib/librte_ring.a 00:02:38.488 [104/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:38.488 [105/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:38.488 [106/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:38.488 [107/268] Compiling C object 
lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:38.488 [108/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:38.488 [109/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:38.488 [110/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:38.488 [111/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:38.488 [112/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:38.488 [113/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:38.488 [114/268] Linking static target lib/librte_timer.a 00:02:38.488 [115/268] Linking static target lib/librte_cmdline.a 00:02:38.488 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:38.488 [117/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:38.488 [118/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:38.488 [119/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:38.488 [120/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:38.488 [121/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:38.488 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:38.488 [123/268] Linking static target lib/librte_mempool.a 00:02:38.488 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:38.488 [125/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:38.488 [126/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:38.488 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:38.488 [128/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:38.488 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:38.488 [130/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:38.488 [131/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:38.488 [132/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:38.488 [133/268] Linking static target lib/librte_eal.a 00:02:38.488 [134/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:38.488 [135/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:38.488 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:38.488 [137/268] Linking static target lib/librte_dmadev.a 00:02:38.488 [138/268] Linking static target lib/librte_rcu.a 00:02:38.488 [139/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:38.488 [140/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:38.488 [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:38.488 [142/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:38.488 [143/268] Linking static target lib/librte_net.a 00:02:38.488 [144/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.488 [145/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:38.488 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:38.488 [147/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:38.488 [148/268] Compiling C object 
lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:38.488 [149/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:38.488 [150/268] Linking static target lib/librte_compressdev.a 00:02:38.488 [151/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:38.488 [152/268] Linking target lib/librte_log.so.24.1 00:02:38.488 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:38.488 [154/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:38.747 [155/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:38.747 [156/268] Linking static target lib/librte_mbuf.a 00:02:38.747 [157/268] Linking static target lib/librte_hash.a 00:02:38.747 [158/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:38.747 [159/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.747 [160/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:38.747 [161/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:38.747 [162/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:38.747 [163/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:38.747 [164/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:38.747 [165/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:38.747 [166/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.747 [167/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:38.747 [168/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:38.747 [169/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:38.747 [170/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:38.747 [171/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:38.747 [172/268] Linking target lib/librte_kvargs.so.24.1 00:02:38.747 [173/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:38.747 [174/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:38.747 [175/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:38.747 [176/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:38.747 [177/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:38.747 [178/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:38.747 [179/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:38.747 [180/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:38.747 [181/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:38.747 [182/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:38.747 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:38.747 [184/268] Linking static target lib/librte_cryptodev.a 00:02:38.747 [185/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.747 [186/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:38.747 [187/268] Linking static target lib/librte_power.a 00:02:38.747 [188/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.747 
[189/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:39.007 [190/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.007 [191/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:39.007 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:39.007 [193/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:39.007 [194/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.007 [195/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:39.007 [196/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:39.007 [197/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:39.007 [198/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:39.007 [199/268] Linking static target lib/librte_reorder.a 00:02:39.007 [200/268] Linking target lib/librte_telemetry.so.24.1 00:02:39.007 [201/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:39.007 [202/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:39.007 [203/268] Linking static target drivers/librte_bus_vdev.a 00:02:39.007 [204/268] Linking static target lib/librte_security.a 00:02:39.007 [205/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:39.007 [206/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:39.007 [207/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:39.007 [208/268] Linking static target drivers/librte_mempool_ring.a 00:02:39.007 [209/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:39.007 [210/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:39.007 [211/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:39.266 [212/268] Linking static target lib/librte_ethdev.a 00:02:39.266 [213/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:39.266 [214/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:39.266 [215/268] Linking static target drivers/librte_bus_pci.a 00:02:39.266 [216/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.266 [217/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:39.266 [218/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.525 [219/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.525 [220/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.525 [221/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.525 [222/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.525 [223/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.784 [224/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.784 [225/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture 
output) 00:02:39.784 [226/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:39.784 [227/268] Linking static target lib/librte_vhost.a 00:02:39.784 [228/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.043 [229/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.979 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.915 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.039 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.975 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.975 [234/268] Linking target lib/librte_eal.so.24.1 00:02:51.233 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:51.233 [236/268] Linking target lib/librte_pci.so.24.1 00:02:51.233 [237/268] Linking target lib/librte_dmadev.so.24.1 00:02:51.233 [238/268] Linking target lib/librte_meter.so.24.1 00:02:51.233 [239/268] Linking target lib/librte_ring.so.24.1 00:02:51.233 [240/268] Linking target lib/librte_timer.so.24.1 00:02:51.233 [241/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:51.492 [242/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:51.492 [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:51.492 [244/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:51.492 [245/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:51.492 [246/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:51.492 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:51.492 [248/268] Linking target lib/librte_mempool.so.24.1 00:02:51.492 [249/268] Linking target lib/librte_rcu.so.24.1 00:02:51.492 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:51.492 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:51.751 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:51.751 [253/268] Linking target lib/librte_mbuf.so.24.1 00:02:51.751 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:51.751 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:02:51.751 [256/268] Linking target lib/librte_compressdev.so.24.1 00:02:51.751 [257/268] Linking target lib/librte_net.so.24.1 00:02:51.751 [258/268] Linking target lib/librte_reorder.so.24.1 00:02:52.009 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:52.009 [260/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:52.009 [261/268] Linking target lib/librte_hash.so.24.1 00:02:52.009 [262/268] Linking target lib/librte_cmdline.so.24.1 00:02:52.009 [263/268] Linking target lib/librte_security.so.24.1 00:02:52.009 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:52.268 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:52.268 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:52.268 [267/268] Linking target lib/librte_power.so.24.1 00:02:52.268 [268/268] Linking target 
lib/librte_vhost.so.24.1 00:02:52.268 INFO: autodetecting backend as ninja 00:02:52.268 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp -j 112 00:02:53.204 CC lib/ut/ut.o 00:02:53.462 CC lib/ut_mock/mock.o 00:02:53.462 CC lib/log/log.o 00:02:53.463 CC lib/log/log_flags.o 00:02:53.463 CC lib/log/log_deprecated.o 00:02:53.463 LIB libspdk_ut.a 00:02:53.463 LIB libspdk_ut_mock.a 00:02:53.463 LIB libspdk_log.a 00:02:53.721 CC lib/ioat/ioat.o 00:02:53.721 CC lib/dma/dma.o 00:02:53.721 CC lib/util/base64.o 00:02:53.721 CC lib/util/bit_array.o 00:02:53.980 CC lib/util/cpuset.o 00:02:53.980 CXX lib/trace_parser/trace.o 00:02:53.980 CC lib/util/crc16.o 00:02:53.980 CC lib/util/crc32_ieee.o 00:02:53.980 CC lib/util/crc32.o 00:02:53.980 CC lib/util/crc32c.o 00:02:53.980 CC lib/util/crc64.o 00:02:53.980 CC lib/util/dif.o 00:02:53.980 CC lib/util/fd.o 00:02:53.980 CC lib/util/fd_group.o 00:02:53.980 CC lib/util/file.o 00:02:53.980 CC lib/util/math.o 00:02:53.980 CC lib/util/hexlify.o 00:02:53.980 CC lib/util/iov.o 00:02:53.980 CC lib/util/net.o 00:02:53.980 CC lib/util/string.o 00:02:53.980 CC lib/util/pipe.o 00:02:53.980 CC lib/util/strerror_tls.o 00:02:53.980 CC lib/util/xor.o 00:02:53.980 CC lib/util/uuid.o 00:02:53.980 CC lib/util/zipf.o 00:02:53.980 CC lib/util/md5.o 00:02:53.980 LIB libspdk_dma.a 00:02:53.980 CC lib/vfio_user/host/vfio_user_pci.o 00:02:53.980 CC lib/vfio_user/host/vfio_user.o 00:02:53.980 LIB libspdk_ioat.a 00:02:54.240 LIB libspdk_vfio_user.a 00:02:54.240 LIB libspdk_util.a 00:02:54.240 LIB libspdk_trace_parser.a 00:02:54.498 CC lib/vmd/vmd.o 00:02:54.498 CC lib/vmd/led.o 00:02:54.498 CC lib/env_dpdk/pci.o 00:02:54.498 CC lib/json/json_parse.o 00:02:54.498 CC lib/env_dpdk/env.o 00:02:54.498 CC lib/env_dpdk/threads.o 00:02:54.498 CC lib/json/json_util.o 00:02:54.498 CC lib/json/json_write.o 00:02:54.498 CC lib/env_dpdk/memory.o 00:02:54.498 CC lib/env_dpdk/init.o 00:02:54.498 CC lib/env_dpdk/pci_ioat.o 00:02:54.498 CC lib/env_dpdk/pci_virtio.o 00:02:54.498 CC lib/env_dpdk/pci_vmd.o 00:02:54.498 CC lib/env_dpdk/pci_idxd.o 00:02:54.498 CC lib/env_dpdk/pci_event.o 00:02:54.498 CC lib/env_dpdk/sigbus_handler.o 00:02:54.498 CC lib/env_dpdk/pci_dpdk.o 00:02:54.498 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:54.498 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:54.498 CC lib/conf/conf.o 00:02:54.498 CC lib/idxd/idxd.o 00:02:54.498 CC lib/rdma_utils/rdma_utils.o 00:02:54.498 CC lib/idxd/idxd_user.o 00:02:54.498 CC lib/idxd/idxd_kernel.o 00:02:54.757 LIB libspdk_conf.a 00:02:54.757 LIB libspdk_json.a 00:02:54.757 LIB libspdk_rdma_utils.a 00:02:54.757 LIB libspdk_vmd.a 00:02:54.757 LIB libspdk_idxd.a 00:02:55.015 CC lib/jsonrpc/jsonrpc_server.o 00:02:55.015 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:55.015 CC lib/jsonrpc/jsonrpc_client.o 00:02:55.015 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:55.015 CC lib/rdma_provider/common.o 00:02:55.015 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:55.274 LIB libspdk_jsonrpc.a 00:02:55.274 LIB libspdk_rdma_provider.a 00:02:55.532 LIB libspdk_env_dpdk.a 00:02:55.532 CC lib/rpc/rpc.o 00:02:55.790 LIB libspdk_rpc.a 00:02:56.049 CC lib/keyring/keyring.o 00:02:56.049 CC lib/keyring/keyring_rpc.o 00:02:56.049 CC lib/notify/notify.o 00:02:56.049 CC lib/notify/notify_rpc.o 00:02:56.049 CC lib/trace/trace_rpc.o 00:02:56.049 CC lib/trace/trace.o 00:02:56.049 CC lib/trace/trace_flags.o 00:02:56.049 LIB libspdk_notify.a 00:02:56.049 LIB libspdk_keyring.a 00:02:56.308 LIB libspdk_trace.a 
00:02:56.567 CC lib/thread/thread.o 00:02:56.567 CC lib/thread/iobuf.o 00:02:56.567 CC lib/sock/sock.o 00:02:56.567 CC lib/sock/sock_rpc.o 00:02:56.826 LIB libspdk_sock.a 00:02:57.085 CC lib/nvme/nvme_ctrlr.o 00:02:57.085 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:57.085 CC lib/nvme/nvme_ns_cmd.o 00:02:57.085 CC lib/nvme/nvme_fabric.o 00:02:57.085 CC lib/nvme/nvme_pcie_common.o 00:02:57.085 CC lib/nvme/nvme_ns.o 00:02:57.085 CC lib/nvme/nvme_qpair.o 00:02:57.085 CC lib/nvme/nvme_pcie.o 00:02:57.085 CC lib/nvme/nvme_quirks.o 00:02:57.085 CC lib/nvme/nvme_transport.o 00:02:57.085 CC lib/nvme/nvme.o 00:02:57.085 CC lib/nvme/nvme_discovery.o 00:02:57.085 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:57.085 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:57.085 CC lib/nvme/nvme_opal.o 00:02:57.085 CC lib/nvme/nvme_tcp.o 00:02:57.085 CC lib/nvme/nvme_io_msg.o 00:02:57.085 CC lib/nvme/nvme_poll_group.o 00:02:57.085 CC lib/nvme/nvme_zns.o 00:02:57.085 CC lib/nvme/nvme_stubs.o 00:02:57.085 CC lib/nvme/nvme_auth.o 00:02:57.085 CC lib/nvme/nvme_cuse.o 00:02:57.085 CC lib/nvme/nvme_vfio_user.o 00:02:57.085 CC lib/nvme/nvme_rdma.o 00:02:57.344 LIB libspdk_thread.a 00:02:57.603 CC lib/virtio/virtio_vfio_user.o 00:02:57.603 CC lib/virtio/virtio.o 00:02:57.603 CC lib/virtio/virtio_vhost_user.o 00:02:57.603 CC lib/virtio/virtio_pci.o 00:02:57.603 CC lib/accel/accel.o 00:02:57.603 CC lib/accel/accel_rpc.o 00:02:57.603 CC lib/accel/accel_sw.o 00:02:57.603 CC lib/blob/blobstore.o 00:02:57.603 CC lib/blob/request.o 00:02:57.603 CC lib/blob/zeroes.o 00:02:57.603 CC lib/blob/blob_bs_dev.o 00:02:57.603 CC lib/fsdev/fsdev_io.o 00:02:57.603 CC lib/fsdev/fsdev.o 00:02:57.603 CC lib/fsdev/fsdev_rpc.o 00:02:57.603 CC lib/init/subsystem_rpc.o 00:02:57.603 CC lib/init/json_config.o 00:02:57.603 CC lib/init/rpc.o 00:02:57.603 CC lib/init/subsystem.o 00:02:57.603 CC lib/vfu_tgt/tgt_endpoint.o 00:02:57.603 CC lib/vfu_tgt/tgt_rpc.o 00:02:57.861 LIB libspdk_init.a 00:02:57.861 LIB libspdk_virtio.a 00:02:57.862 LIB libspdk_vfu_tgt.a 00:02:58.120 LIB libspdk_fsdev.a 00:02:58.120 CC lib/event/app.o 00:02:58.120 CC lib/event/reactor.o 00:02:58.120 CC lib/event/log_rpc.o 00:02:58.120 CC lib/event/app_rpc.o 00:02:58.120 CC lib/event/scheduler_static.o 00:02:58.379 LIB libspdk_accel.a 00:02:58.379 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:58.379 LIB libspdk_event.a 00:02:58.379 LIB libspdk_nvme.a 00:02:58.638 CC lib/bdev/bdev.o 00:02:58.638 CC lib/bdev/bdev_rpc.o 00:02:58.638 CC lib/bdev/bdev_zone.o 00:02:58.638 CC lib/bdev/part.o 00:02:58.638 CC lib/bdev/scsi_nvme.o 00:02:58.638 LIB libspdk_fuse_dispatcher.a 00:02:59.205 LIB libspdk_blob.a 00:02:59.774 CC lib/blobfs/blobfs.o 00:02:59.774 CC lib/blobfs/tree.o 00:02:59.774 CC lib/lvol/lvol.o 00:03:00.034 LIB libspdk_lvol.a 00:03:00.293 LIB libspdk_blobfs.a 00:03:00.293 LIB libspdk_bdev.a 00:03:00.862 CC lib/nbd/nbd.o 00:03:00.862 CC lib/nbd/nbd_rpc.o 00:03:00.862 CC lib/ublk/ublk.o 00:03:00.862 CC lib/ublk/ublk_rpc.o 00:03:00.862 CC lib/scsi/dev.o 00:03:00.862 CC lib/nvmf/ctrlr_discovery.o 00:03:00.862 CC lib/ftl/ftl_core.o 00:03:00.862 CC lib/scsi/lun.o 00:03:00.862 CC lib/nvmf/ctrlr.o 00:03:00.862 CC lib/ftl/ftl_init.o 00:03:00.862 CC lib/scsi/port.o 00:03:00.862 CC lib/ftl/ftl_layout.o 00:03:00.862 CC lib/scsi/scsi.o 00:03:00.862 CC lib/ftl/ftl_debug.o 00:03:00.862 CC lib/nvmf/ctrlr_bdev.o 00:03:00.862 CC lib/nvmf/subsystem.o 00:03:00.862 CC lib/scsi/scsi_bdev.o 00:03:00.862 CC lib/ftl/ftl_io.o 00:03:00.862 CC lib/scsi/scsi_pr.o 00:03:00.862 CC lib/nvmf/nvmf.o 00:03:00.862 CC lib/ftl/ftl_sb.o 
00:03:00.862 CC lib/scsi/scsi_rpc.o 00:03:00.862 CC lib/nvmf/tcp.o 00:03:00.862 CC lib/nvmf/nvmf_rpc.o 00:03:00.862 CC lib/scsi/task.o 00:03:00.862 CC lib/ftl/ftl_l2p.o 00:03:00.862 CC lib/nvmf/transport.o 00:03:00.862 CC lib/ftl/ftl_l2p_flat.o 00:03:00.862 CC lib/ftl/ftl_nv_cache.o 00:03:00.862 CC lib/nvmf/stubs.o 00:03:00.862 CC lib/nvmf/mdns_server.o 00:03:00.862 CC lib/ftl/ftl_band.o 00:03:00.862 CC lib/nvmf/vfio_user.o 00:03:00.862 CC lib/ftl/ftl_band_ops.o 00:03:00.862 CC lib/nvmf/rdma.o 00:03:00.862 CC lib/ftl/ftl_rq.o 00:03:00.862 CC lib/ftl/ftl_writer.o 00:03:00.862 CC lib/nvmf/auth.o 00:03:00.862 CC lib/ftl/ftl_reloc.o 00:03:00.862 CC lib/ftl/ftl_l2p_cache.o 00:03:00.862 CC lib/ftl/ftl_p2l_log.o 00:03:00.862 CC lib/ftl/ftl_p2l.o 00:03:00.862 CC lib/ftl/mngt/ftl_mngt.o 00:03:00.862 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:00.862 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:00.862 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:00.862 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:00.862 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:00.862 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:00.862 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:00.862 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:00.862 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:00.862 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:00.862 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:00.862 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:00.862 CC lib/ftl/utils/ftl_md.o 00:03:00.862 CC lib/ftl/utils/ftl_conf.o 00:03:00.862 CC lib/ftl/utils/ftl_mempool.o 00:03:00.862 CC lib/ftl/utils/ftl_bitmap.o 00:03:00.862 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:00.862 CC lib/ftl/utils/ftl_property.o 00:03:00.862 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:00.862 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:00.862 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:00.862 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:00.862 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:00.862 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:00.862 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:00.862 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:00.862 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:00.862 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:00.862 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:00.862 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:00.862 CC lib/ftl/base/ftl_base_dev.o 00:03:00.862 CC lib/ftl/base/ftl_base_bdev.o 00:03:00.862 CC lib/ftl/ftl_trace.o 00:03:01.121 LIB libspdk_nbd.a 00:03:01.121 LIB libspdk_scsi.a 00:03:01.121 LIB libspdk_ublk.a 00:03:01.380 LIB libspdk_ftl.a 00:03:01.381 CC lib/iscsi/param.o 00:03:01.381 CC lib/iscsi/conn.o 00:03:01.381 CC lib/iscsi/iscsi.o 00:03:01.381 CC lib/iscsi/init_grp.o 00:03:01.381 CC lib/iscsi/task.o 00:03:01.381 CC lib/iscsi/portal_grp.o 00:03:01.381 CC lib/iscsi/tgt_node.o 00:03:01.381 CC lib/iscsi/iscsi_subsystem.o 00:03:01.381 CC lib/iscsi/iscsi_rpc.o 00:03:01.639 CC lib/vhost/vhost.o 00:03:01.639 CC lib/vhost/vhost_rpc.o 00:03:01.639 CC lib/vhost/vhost_scsi.o 00:03:01.639 CC lib/vhost/vhost_blk.o 00:03:01.639 CC lib/vhost/rte_vhost_user.o 00:03:01.899 LIB libspdk_nvmf.a 00:03:02.158 LIB libspdk_vhost.a 00:03:02.158 LIB libspdk_iscsi.a 00:03:02.727 CC module/vfu_device/vfu_virtio.o 00:03:02.727 CC module/vfu_device/vfu_virtio_blk.o 00:03:02.727 CC module/vfu_device/vfu_virtio_scsi.o 00:03:02.727 CC module/vfu_device/vfu_virtio_rpc.o 00:03:02.727 CC module/vfu_device/vfu_virtio_fs.o 00:03:02.727 CC module/env_dpdk/env_dpdk_rpc.o 00:03:02.727 CC module/accel/dsa/accel_dsa.o 00:03:02.727 CC module/accel/dsa/accel_dsa_rpc.o 00:03:02.727 LIB libspdk_env_dpdk_rpc.a 00:03:02.986 CC module/accel/iaa/accel_iaa.o 
00:03:02.986 CC module/accel/iaa/accel_iaa_rpc.o 00:03:02.986 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:02.986 CC module/fsdev/aio/fsdev_aio.o 00:03:02.986 CC module/sock/posix/posix.o 00:03:02.986 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:02.986 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:02.986 CC module/fsdev/aio/linux_aio_mgr.o 00:03:02.986 CC module/accel/ioat/accel_ioat.o 00:03:02.986 CC module/accel/ioat/accel_ioat_rpc.o 00:03:02.986 CC module/keyring/linux/keyring.o 00:03:02.986 CC module/scheduler/gscheduler/gscheduler.o 00:03:02.986 CC module/keyring/linux/keyring_rpc.o 00:03:02.986 CC module/accel/error/accel_error.o 00:03:02.986 CC module/accel/error/accel_error_rpc.o 00:03:02.986 CC module/keyring/file/keyring.o 00:03:02.986 CC module/keyring/file/keyring_rpc.o 00:03:02.986 CC module/blob/bdev/blob_bdev.o 00:03:02.986 LIB libspdk_scheduler_dpdk_governor.a 00:03:02.986 LIB libspdk_keyring_linux.a 00:03:02.986 LIB libspdk_keyring_file.a 00:03:02.986 LIB libspdk_scheduler_gscheduler.a 00:03:02.986 LIB libspdk_accel_iaa.a 00:03:02.986 LIB libspdk_scheduler_dynamic.a 00:03:02.986 LIB libspdk_accel_ioat.a 00:03:02.986 LIB libspdk_accel_error.a 00:03:02.986 LIB libspdk_accel_dsa.a 00:03:02.986 LIB libspdk_blob_bdev.a 00:03:03.245 LIB libspdk_vfu_device.a 00:03:03.245 LIB libspdk_sock_posix.a 00:03:03.245 LIB libspdk_fsdev_aio.a 00:03:03.504 CC module/bdev/nvme/nvme_rpc.o 00:03:03.504 CC module/bdev/nvme/bdev_nvme.o 00:03:03.504 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:03.504 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:03.505 CC module/bdev/nvme/vbdev_opal.o 00:03:03.505 CC module/bdev/nvme/bdev_mdns_client.o 00:03:03.505 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:03.505 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:03.505 CC module/bdev/passthru/vbdev_passthru.o 00:03:03.505 CC module/bdev/raid/bdev_raid.o 00:03:03.505 CC module/bdev/raid/bdev_raid_rpc.o 00:03:03.505 CC module/bdev/raid/raid1.o 00:03:03.505 CC module/bdev/raid/bdev_raid_sb.o 00:03:03.505 CC module/bdev/raid/raid0.o 00:03:03.505 CC module/bdev/raid/concat.o 00:03:03.505 CC module/bdev/lvol/vbdev_lvol.o 00:03:03.505 CC module/bdev/error/vbdev_error.o 00:03:03.505 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:03.505 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:03.505 CC module/bdev/delay/vbdev_delay.o 00:03:03.505 CC module/bdev/error/vbdev_error_rpc.o 00:03:03.505 CC module/bdev/null/bdev_null.o 00:03:03.505 CC module/bdev/null/bdev_null_rpc.o 00:03:03.505 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:03.505 CC module/bdev/iscsi/bdev_iscsi.o 00:03:03.505 CC module/bdev/gpt/gpt.o 00:03:03.505 CC module/bdev/aio/bdev_aio.o 00:03:03.505 CC module/bdev/malloc/bdev_malloc.o 00:03:03.505 CC module/bdev/aio/bdev_aio_rpc.o 00:03:03.505 CC module/bdev/gpt/vbdev_gpt.o 00:03:03.505 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:03.505 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:03.505 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:03.505 CC module/blobfs/bdev/blobfs_bdev.o 00:03:03.505 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:03.505 CC module/bdev/ftl/bdev_ftl.o 00:03:03.505 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:03.505 CC module/bdev/split/vbdev_split.o 00:03:03.505 CC module/bdev/split/vbdev_split_rpc.o 00:03:03.505 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:03.505 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:03.505 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:03.763 LIB libspdk_blobfs_bdev.a 00:03:03.763 LIB libspdk_bdev_split.a 00:03:03.764 LIB libspdk_bdev_passthru.a 
00:03:03.764 LIB libspdk_bdev_error.a 00:03:03.764 LIB libspdk_bdev_null.a 00:03:03.764 LIB libspdk_bdev_gpt.a 00:03:03.764 LIB libspdk_bdev_ftl.a 00:03:03.764 LIB libspdk_bdev_zone_block.a 00:03:03.764 LIB libspdk_bdev_aio.a 00:03:03.764 LIB libspdk_bdev_iscsi.a 00:03:03.764 LIB libspdk_bdev_delay.a 00:03:03.764 LIB libspdk_bdev_malloc.a 00:03:04.023 LIB libspdk_bdev_lvol.a 00:03:04.023 LIB libspdk_bdev_virtio.a 00:03:04.280 LIB libspdk_bdev_raid.a 00:03:04.923 LIB libspdk_bdev_nvme.a 00:03:05.491 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:05.491 CC module/event/subsystems/scheduler/scheduler.o 00:03:05.750 CC module/event/subsystems/sock/sock.o 00:03:05.750 CC module/event/subsystems/vmd/vmd.o 00:03:05.750 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:05.750 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:05.750 CC module/event/subsystems/iobuf/iobuf.o 00:03:05.750 CC module/event/subsystems/keyring/keyring.o 00:03:05.750 CC module/event/subsystems/fsdev/fsdev.o 00:03:05.750 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:05.750 LIB libspdk_event_vfu_tgt.a 00:03:05.750 LIB libspdk_event_scheduler.a 00:03:05.750 LIB libspdk_event_sock.a 00:03:05.750 LIB libspdk_event_keyring.a 00:03:05.750 LIB libspdk_event_vmd.a 00:03:05.750 LIB libspdk_event_iobuf.a 00:03:05.750 LIB libspdk_event_fsdev.a 00:03:05.750 LIB libspdk_event_vhost_blk.a 00:03:06.009 CC module/event/subsystems/accel/accel.o 00:03:06.268 LIB libspdk_event_accel.a 00:03:06.527 CC module/event/subsystems/bdev/bdev.o 00:03:06.787 LIB libspdk_event_bdev.a 00:03:07.046 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:07.046 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:07.046 CC module/event/subsystems/scsi/scsi.o 00:03:07.046 CC module/event/subsystems/ublk/ublk.o 00:03:07.046 CC module/event/subsystems/nbd/nbd.o 00:03:07.046 LIB libspdk_event_scsi.a 00:03:07.046 LIB libspdk_event_nbd.a 00:03:07.046 LIB libspdk_event_ublk.a 00:03:07.046 LIB libspdk_event_nvmf.a 00:03:07.614 CC module/event/subsystems/iscsi/iscsi.o 00:03:07.614 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:07.614 LIB libspdk_event_vhost_scsi.a 00:03:07.614 LIB libspdk_event_iscsi.a 00:03:07.871 CC app/spdk_nvme_discover/discovery_aer.o 00:03:07.872 CC app/spdk_top/spdk_top.o 00:03:07.872 CC app/spdk_lspci/spdk_lspci.o 00:03:07.872 CXX app/trace/trace.o 00:03:07.872 TEST_HEADER include/spdk/accel.h 00:03:07.872 TEST_HEADER include/spdk/accel_module.h 00:03:07.872 TEST_HEADER include/spdk/assert.h 00:03:07.872 TEST_HEADER include/spdk/barrier.h 00:03:07.872 CC app/spdk_nvme_perf/perf.o 00:03:07.872 TEST_HEADER include/spdk/base64.h 00:03:07.872 TEST_HEADER include/spdk/bdev.h 00:03:07.872 TEST_HEADER include/spdk/bdev_module.h 00:03:07.872 CC app/trace_record/trace_record.o 00:03:07.872 TEST_HEADER include/spdk/bdev_zone.h 00:03:07.872 TEST_HEADER include/spdk/bit_array.h 00:03:07.872 TEST_HEADER include/spdk/blobfs.h 00:03:07.872 TEST_HEADER include/spdk/bit_pool.h 00:03:07.872 TEST_HEADER include/spdk/blob_bdev.h 00:03:07.872 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:07.872 TEST_HEADER include/spdk/config.h 00:03:07.872 TEST_HEADER include/spdk/crc16.h 00:03:07.872 TEST_HEADER include/spdk/blob.h 00:03:07.872 TEST_HEADER include/spdk/conf.h 00:03:07.872 CC app/spdk_nvme_identify/identify.o 00:03:07.872 CC test/rpc_client/rpc_client_test.o 00:03:07.872 TEST_HEADER include/spdk/crc32.h 00:03:07.872 TEST_HEADER include/spdk/cpuset.h 00:03:07.872 TEST_HEADER include/spdk/dif.h 00:03:07.872 TEST_HEADER include/spdk/crc64.h 00:03:07.872 
TEST_HEADER include/spdk/dma.h 00:03:07.872 TEST_HEADER include/spdk/endian.h 00:03:07.872 TEST_HEADER include/spdk/event.h 00:03:07.872 TEST_HEADER include/spdk/env_dpdk.h 00:03:07.872 TEST_HEADER include/spdk/env.h 00:03:07.872 TEST_HEADER include/spdk/file.h 00:03:07.872 TEST_HEADER include/spdk/fd_group.h 00:03:07.872 TEST_HEADER include/spdk/fd.h 00:03:07.872 TEST_HEADER include/spdk/fsdev.h 00:03:07.872 TEST_HEADER include/spdk/ftl.h 00:03:07.872 TEST_HEADER include/spdk/fsdev_module.h 00:03:07.872 TEST_HEADER include/spdk/histogram_data.h 00:03:07.872 TEST_HEADER include/spdk/gpt_spec.h 00:03:07.872 TEST_HEADER include/spdk/hexlify.h 00:03:07.872 TEST_HEADER include/spdk/init.h 00:03:07.872 TEST_HEADER include/spdk/idxd_spec.h 00:03:07.872 TEST_HEADER include/spdk/ioat.h 00:03:07.872 TEST_HEADER include/spdk/ioat_spec.h 00:03:07.872 TEST_HEADER include/spdk/idxd.h 00:03:07.872 TEST_HEADER include/spdk/keyring.h 00:03:07.872 TEST_HEADER include/spdk/iscsi_spec.h 00:03:07.872 TEST_HEADER include/spdk/json.h 00:03:07.872 TEST_HEADER include/spdk/likely.h 00:03:07.872 TEST_HEADER include/spdk/jsonrpc.h 00:03:07.872 TEST_HEADER include/spdk/keyring_module.h 00:03:07.872 CC app/iscsi_tgt/iscsi_tgt.o 00:03:07.872 TEST_HEADER include/spdk/log.h 00:03:08.136 TEST_HEADER include/spdk/md5.h 00:03:08.136 TEST_HEADER include/spdk/lvol.h 00:03:08.136 TEST_HEADER include/spdk/memory.h 00:03:08.136 TEST_HEADER include/spdk/mmio.h 00:03:08.136 TEST_HEADER include/spdk/nbd.h 00:03:08.136 TEST_HEADER include/spdk/notify.h 00:03:08.136 TEST_HEADER include/spdk/nvme.h 00:03:08.136 TEST_HEADER include/spdk/net.h 00:03:08.136 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:08.136 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:08.136 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:08.136 CC app/spdk_dd/spdk_dd.o 00:03:08.136 TEST_HEADER include/spdk/nvme_intel.h 00:03:08.136 TEST_HEADER include/spdk/nvme_spec.h 00:03:08.136 TEST_HEADER include/spdk/nvme_zns.h 00:03:08.136 CC app/nvmf_tgt/nvmf_main.o 00:03:08.136 TEST_HEADER include/spdk/nvmf.h 00:03:08.136 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:08.136 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:08.136 TEST_HEADER include/spdk/nvmf_spec.h 00:03:08.136 TEST_HEADER include/spdk/nvmf_transport.h 00:03:08.136 TEST_HEADER include/spdk/opal.h 00:03:08.136 TEST_HEADER include/spdk/pci_ids.h 00:03:08.136 TEST_HEADER include/spdk/opal_spec.h 00:03:08.136 TEST_HEADER include/spdk/queue.h 00:03:08.136 TEST_HEADER include/spdk/pipe.h 00:03:08.136 TEST_HEADER include/spdk/reduce.h 00:03:08.136 TEST_HEADER include/spdk/scheduler.h 00:03:08.136 TEST_HEADER include/spdk/rpc.h 00:03:08.136 TEST_HEADER include/spdk/scsi_spec.h 00:03:08.136 TEST_HEADER include/spdk/scsi.h 00:03:08.136 TEST_HEADER include/spdk/sock.h 00:03:08.136 TEST_HEADER include/spdk/string.h 00:03:08.136 TEST_HEADER include/spdk/thread.h 00:03:08.136 TEST_HEADER include/spdk/stdinc.h 00:03:08.136 TEST_HEADER include/spdk/trace.h 00:03:08.136 TEST_HEADER include/spdk/tree.h 00:03:08.136 TEST_HEADER include/spdk/ublk.h 00:03:08.136 TEST_HEADER include/spdk/trace_parser.h 00:03:08.136 TEST_HEADER include/spdk/util.h 00:03:08.136 TEST_HEADER include/spdk/version.h 00:03:08.136 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:08.136 TEST_HEADER include/spdk/uuid.h 00:03:08.136 TEST_HEADER include/spdk/vhost.h 00:03:08.136 TEST_HEADER include/spdk/vmd.h 00:03:08.136 TEST_HEADER include/spdk/zipf.h 00:03:08.136 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:08.136 TEST_HEADER include/spdk/xor.h 00:03:08.136 
CXX test/cpp_headers/accel_module.o 00:03:08.136 CXX test/cpp_headers/assert.o 00:03:08.136 CXX test/cpp_headers/barrier.o 00:03:08.136 CXX test/cpp_headers/accel.o 00:03:08.136 CXX test/cpp_headers/base64.o 00:03:08.136 CXX test/cpp_headers/bdev.o 00:03:08.136 CXX test/cpp_headers/bdev_zone.o 00:03:08.136 CXX test/cpp_headers/bdev_module.o 00:03:08.136 CXX test/cpp_headers/bit_array.o 00:03:08.136 CXX test/cpp_headers/blob_bdev.o 00:03:08.136 CXX test/cpp_headers/blobfs.o 00:03:08.136 CXX test/cpp_headers/blobfs_bdev.o 00:03:08.136 CXX test/cpp_headers/bit_pool.o 00:03:08.136 CXX test/cpp_headers/blob.o 00:03:08.136 CXX test/cpp_headers/config.o 00:03:08.136 CXX test/cpp_headers/cpuset.o 00:03:08.136 CXX test/cpp_headers/crc16.o 00:03:08.136 CC app/spdk_tgt/spdk_tgt.o 00:03:08.136 CXX test/cpp_headers/conf.o 00:03:08.136 CXX test/cpp_headers/crc32.o 00:03:08.136 CXX test/cpp_headers/crc64.o 00:03:08.136 CXX test/cpp_headers/endian.o 00:03:08.136 CXX test/cpp_headers/dif.o 00:03:08.136 CXX test/cpp_headers/dma.o 00:03:08.136 CXX test/cpp_headers/env.o 00:03:08.136 CXX test/cpp_headers/env_dpdk.o 00:03:08.136 CXX test/cpp_headers/fd_group.o 00:03:08.136 CXX test/cpp_headers/event.o 00:03:08.136 CXX test/cpp_headers/file.o 00:03:08.136 CXX test/cpp_headers/fd.o 00:03:08.136 CXX test/cpp_headers/fsdev.o 00:03:08.136 CXX test/cpp_headers/fsdev_module.o 00:03:08.136 CXX test/cpp_headers/ftl.o 00:03:08.136 CXX test/cpp_headers/gpt_spec.o 00:03:08.136 CXX test/cpp_headers/hexlify.o 00:03:08.136 CXX test/cpp_headers/idxd.o 00:03:08.136 CXX test/cpp_headers/histogram_data.o 00:03:08.136 CXX test/cpp_headers/idxd_spec.o 00:03:08.136 CXX test/cpp_headers/ioat.o 00:03:08.136 CXX test/cpp_headers/init.o 00:03:08.136 CXX test/cpp_headers/iscsi_spec.o 00:03:08.136 CXX test/cpp_headers/json.o 00:03:08.136 CXX test/cpp_headers/ioat_spec.o 00:03:08.136 CXX test/cpp_headers/jsonrpc.o 00:03:08.136 CXX test/cpp_headers/keyring.o 00:03:08.136 CXX test/cpp_headers/likely.o 00:03:08.136 CXX test/cpp_headers/keyring_module.o 00:03:08.136 CXX test/cpp_headers/log.o 00:03:08.136 CXX test/cpp_headers/lvol.o 00:03:08.136 CXX test/cpp_headers/memory.o 00:03:08.136 CXX test/cpp_headers/md5.o 00:03:08.136 CXX test/cpp_headers/mmio.o 00:03:08.136 CXX test/cpp_headers/nbd.o 00:03:08.136 CXX test/cpp_headers/net.o 00:03:08.136 CXX test/cpp_headers/notify.o 00:03:08.136 CXX test/cpp_headers/nvme.o 00:03:08.136 CXX test/cpp_headers/nvme_intel.o 00:03:08.136 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:08.136 CXX test/cpp_headers/nvme_ocssd.o 00:03:08.136 CXX test/cpp_headers/nvme_spec.o 00:03:08.136 CXX test/cpp_headers/nvme_zns.o 00:03:08.136 CXX test/cpp_headers/nvmf_cmd.o 00:03:08.136 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:08.136 CXX test/cpp_headers/nvmf.o 00:03:08.136 CXX test/cpp_headers/nvmf_spec.o 00:03:08.136 CC test/env/vtophys/vtophys.o 00:03:08.136 CXX test/cpp_headers/nvmf_transport.o 00:03:08.136 CXX test/cpp_headers/opal_spec.o 00:03:08.136 CXX test/cpp_headers/opal.o 00:03:08.136 CXX test/cpp_headers/pci_ids.o 00:03:08.136 CXX test/cpp_headers/pipe.o 00:03:08.136 CXX test/cpp_headers/queue.o 00:03:08.136 CXX test/cpp_headers/reduce.o 00:03:08.136 CXX test/cpp_headers/rpc.o 00:03:08.136 CC examples/ioat/perf/perf.o 00:03:08.136 CXX test/cpp_headers/scheduler.o 00:03:08.136 CXX test/cpp_headers/scsi_spec.o 00:03:08.136 CC examples/ioat/verify/verify.o 00:03:08.136 CXX test/cpp_headers/sock.o 00:03:08.136 CXX test/cpp_headers/scsi.o 00:03:08.136 CXX test/cpp_headers/stdinc.o 00:03:08.136 CXX 
test/cpp_headers/string.o 00:03:08.136 CXX test/cpp_headers/thread.o 00:03:08.136 CXX test/cpp_headers/trace.o 00:03:08.136 CC test/thread/lock/spdk_lock.o 00:03:08.136 CC test/env/memory/memory_ut.o 00:03:08.136 CXX test/cpp_headers/trace_parser.o 00:03:08.136 CC app/fio/nvme/fio_plugin.o 00:03:08.136 CXX test/cpp_headers/tree.o 00:03:08.136 CC test/app/histogram_perf/histogram_perf.o 00:03:08.136 CC test/app/jsoncat/jsoncat.o 00:03:08.136 CC test/thread/poller_perf/poller_perf.o 00:03:08.136 CC test/env/pci/pci_ut.o 00:03:08.136 CC examples/util/zipf/zipf.o 00:03:08.136 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:08.136 CC test/app/stub/stub.o 00:03:08.136 CXX test/cpp_headers/ublk.o 00:03:08.136 LINK spdk_lspci 00:03:08.136 LINK rpc_client_test 00:03:08.136 LINK spdk_nvme_discover 00:03:08.136 CC test/dma/test_dma/test_dma.o 00:03:08.136 CC app/fio/bdev/fio_plugin.o 00:03:08.136 CC test/app/bdev_svc/bdev_svc.o 00:03:08.136 CC test/env/mem_callbacks/mem_callbacks.o 00:03:08.136 LINK spdk_trace_record 00:03:08.136 LINK interrupt_tgt 00:03:08.136 CXX test/cpp_headers/util.o 00:03:08.396 LINK nvmf_tgt 00:03:08.396 CXX test/cpp_headers/uuid.o 00:03:08.396 LINK iscsi_tgt 00:03:08.396 CXX test/cpp_headers/version.o 00:03:08.396 CXX test/cpp_headers/vfio_user_pci.o 00:03:08.396 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:08.396 CXX test/cpp_headers/vfio_user_spec.o 00:03:08.396 CXX test/cpp_headers/vhost.o 00:03:08.396 CXX test/cpp_headers/vmd.o 00:03:08.396 CXX test/cpp_headers/xor.o 00:03:08.396 CXX test/cpp_headers/zipf.o 00:03:08.396 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:08.396 LINK vtophys 00:03:08.396 LINK jsoncat 00:03:08.396 LINK histogram_perf 00:03:08.397 LINK poller_perf 00:03:08.397 LINK zipf 00:03:08.397 LINK env_dpdk_post_init 00:03:08.397 LINK stub 00:03:08.397 LINK verify 00:03:08.397 LINK spdk_tgt 00:03:08.397 LINK ioat_perf 00:03:08.397 CC test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.o 00:03:08.397 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:08.397 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:08.397 LINK bdev_svc 00:03:08.397 CC test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.o 00:03:08.397 LINK spdk_trace 00:03:08.397 LINK spdk_dd 00:03:08.397 LINK pci_ut 00:03:08.655 LINK test_dma 00:03:08.655 LINK spdk_nvme_identify 00:03:08.655 LINK nvme_fuzz 00:03:08.655 LINK spdk_nvme 00:03:08.655 LINK spdk_bdev 00:03:08.655 LINK mem_callbacks 00:03:08.655 LINK spdk_nvme_perf 00:03:08.655 LINK vhost_fuzz 00:03:08.655 LINK llvm_vfio_fuzz 00:03:08.655 LINK spdk_top 00:03:08.913 LINK llvm_nvme_fuzz 00:03:08.913 CC examples/idxd/perf/perf.o 00:03:08.913 CC examples/vmd/led/led.o 00:03:08.913 CC examples/vmd/lsvmd/lsvmd.o 00:03:08.913 CC examples/sock/hello_world/hello_sock.o 00:03:08.913 CC examples/thread/thread/thread_ex.o 00:03:08.913 CC app/vhost/vhost.o 00:03:08.913 LINK lsvmd 00:03:09.171 LINK led 00:03:09.171 LINK memory_ut 00:03:09.171 LINK hello_sock 00:03:09.171 LINK idxd_perf 00:03:09.171 LINK thread 00:03:09.171 LINK vhost 00:03:09.171 LINK spdk_lock 00:03:09.428 LINK iscsi_fuzz 00:03:09.686 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:09.686 CC examples/nvme/arbitration/arbitration.o 00:03:09.686 CC examples/nvme/hello_world/hello_world.o 00:03:09.686 CC examples/nvme/reconnect/reconnect.o 00:03:09.686 CC examples/nvme/hotplug/hotplug.o 00:03:09.686 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:09.944 CC examples/nvme/abort/abort.o 00:03:09.944 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:09.944 CC test/event/event_perf/event_perf.o 
00:03:09.944 CC test/event/reactor/reactor.o 00:03:09.944 CC test/event/reactor_perf/reactor_perf.o 00:03:09.944 CC test/event/app_repeat/app_repeat.o 00:03:09.944 CC test/event/scheduler/scheduler.o 00:03:09.944 LINK event_perf 00:03:09.944 LINK reactor 00:03:09.944 LINK reactor_perf 00:03:09.944 LINK hello_world 00:03:09.944 LINK cmb_copy 00:03:09.944 LINK pmr_persistence 00:03:09.944 LINK hotplug 00:03:09.944 LINK app_repeat 00:03:09.944 LINK reconnect 00:03:09.944 LINK abort 00:03:10.202 LINK arbitration 00:03:10.202 LINK scheduler 00:03:10.202 LINK nvme_manage 00:03:10.202 CC test/nvme/startup/startup.o 00:03:10.202 CC test/nvme/connect_stress/connect_stress.o 00:03:10.202 CC test/nvme/sgl/sgl.o 00:03:10.202 CC test/nvme/e2edp/nvme_dp.o 00:03:10.202 CC test/nvme/reserve/reserve.o 00:03:10.202 CC test/nvme/aer/aer.o 00:03:10.202 CC test/nvme/simple_copy/simple_copy.o 00:03:10.202 CC test/nvme/err_injection/err_injection.o 00:03:10.202 CC test/nvme/compliance/nvme_compliance.o 00:03:10.203 CC test/nvme/boot_partition/boot_partition.o 00:03:10.203 CC test/nvme/overhead/overhead.o 00:03:10.203 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:10.203 CC test/nvme/fused_ordering/fused_ordering.o 00:03:10.203 CC test/blobfs/mkfs/mkfs.o 00:03:10.203 CC test/nvme/fdp/fdp.o 00:03:10.203 CC test/nvme/reset/reset.o 00:03:10.203 CC test/nvme/cuse/cuse.o 00:03:10.203 CC test/accel/dif/dif.o 00:03:10.203 CC test/lvol/esnap/esnap.o 00:03:10.203 LINK startup 00:03:10.203 LINK connect_stress 00:03:10.203 LINK reserve 00:03:10.203 LINK boot_partition 00:03:10.462 LINK err_injection 00:03:10.462 LINK doorbell_aers 00:03:10.462 LINK simple_copy 00:03:10.462 LINK fused_ordering 00:03:10.462 LINK nvme_dp 00:03:10.462 LINK sgl 00:03:10.462 LINK mkfs 00:03:10.462 LINK aer 00:03:10.462 LINK overhead 00:03:10.462 LINK reset 00:03:10.462 LINK fdp 00:03:10.462 LINK nvme_compliance 00:03:10.719 LINK dif 00:03:10.719 CC examples/accel/perf/accel_perf.o 00:03:10.719 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:10.719 CC examples/blob/hello_world/hello_blob.o 00:03:10.719 CC examples/blob/cli/blobcli.o 00:03:10.977 LINK hello_fsdev 00:03:10.977 LINK hello_blob 00:03:10.977 LINK accel_perf 00:03:11.236 LINK blobcli 00:03:11.236 LINK cuse 00:03:11.805 CC examples/bdev/hello_world/hello_bdev.o 00:03:11.805 CC examples/bdev/bdevperf/bdevperf.o 00:03:12.063 LINK hello_bdev 00:03:12.322 CC test/bdev/bdevio/bdevio.o 00:03:12.322 LINK bdevperf 00:03:12.581 LINK bdevio 00:03:13.559 LINK esnap 00:03:13.819 CC examples/nvmf/nvmf/nvmf.o 00:03:14.079 LINK nvmf 00:03:15.458 00:03:15.458 real 0m46.680s 00:03:15.458 user 6m19.077s 00:03:15.458 sys 2m33.067s 00:03:15.458 06:41:22 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:15.458 06:41:22 make -- common/autotest_common.sh@10 -- $ set +x 00:03:15.458 ************************************ 00:03:15.458 END TEST make 00:03:15.458 ************************************ 00:03:15.458 06:41:22 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:15.458 06:41:22 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:15.458 06:41:22 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:15.458 06:41:22 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:15.458 06:41:22 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:15.458 06:41:22 -- pm/common@44 -- $ pid=1026459 00:03:15.458 06:41:22 -- pm/common@50 -- $ kill -TERM 1026459 00:03:15.458 06:41:22 -- pm/common@42 -- $ 
for monitor in "${MONITOR_RESOURCES[@]}" 00:03:15.458 06:41:22 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:15.458 06:41:22 -- pm/common@44 -- $ pid=1026461 00:03:15.458 06:41:22 -- pm/common@50 -- $ kill -TERM 1026461 00:03:15.458 06:41:22 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:15.458 06:41:22 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:15.458 06:41:22 -- pm/common@44 -- $ pid=1026463 00:03:15.458 06:41:22 -- pm/common@50 -- $ kill -TERM 1026463 00:03:15.458 06:41:22 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:15.458 06:41:22 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:15.458 06:41:22 -- pm/common@44 -- $ pid=1026483 00:03:15.458 06:41:22 -- pm/common@50 -- $ sudo -E kill -TERM 1026483 00:03:15.458 06:41:22 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:15.458 06:41:22 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:03:15.458 06:41:22 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:15.458 06:41:22 -- common/autotest_common.sh@1711 -- # lcov --version 00:03:15.458 06:41:22 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:15.458 06:41:22 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:15.458 06:41:22 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:15.458 06:41:22 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:15.458 06:41:22 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:15.458 06:41:22 -- scripts/common.sh@336 -- # IFS=.-: 00:03:15.458 06:41:22 -- scripts/common.sh@336 -- # read -ra ver1 00:03:15.458 06:41:22 -- scripts/common.sh@337 -- # IFS=.-: 00:03:15.458 06:41:22 -- scripts/common.sh@337 -- # read -ra ver2 00:03:15.458 06:41:22 -- scripts/common.sh@338 -- # local 'op=<' 00:03:15.458 06:41:22 -- scripts/common.sh@340 -- # ver1_l=2 00:03:15.458 06:41:22 -- scripts/common.sh@341 -- # ver2_l=1 00:03:15.458 06:41:22 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:15.458 06:41:22 -- scripts/common.sh@344 -- # case "$op" in 00:03:15.458 06:41:22 -- scripts/common.sh@345 -- # : 1 00:03:15.458 06:41:22 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:15.458 06:41:22 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:15.458 06:41:22 -- scripts/common.sh@365 -- # decimal 1 00:03:15.458 06:41:22 -- scripts/common.sh@353 -- # local d=1 00:03:15.458 06:41:22 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:15.458 06:41:22 -- scripts/common.sh@355 -- # echo 1 00:03:15.458 06:41:22 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:15.458 06:41:22 -- scripts/common.sh@366 -- # decimal 2 00:03:15.458 06:41:22 -- scripts/common.sh@353 -- # local d=2 00:03:15.458 06:41:22 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:15.458 06:41:22 -- scripts/common.sh@355 -- # echo 2 00:03:15.458 06:41:22 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:15.458 06:41:22 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:15.458 06:41:22 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:15.458 06:41:22 -- scripts/common.sh@368 -- # return 0 00:03:15.458 06:41:22 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:15.458 06:41:22 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:15.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:15.458 --rc genhtml_branch_coverage=1 00:03:15.458 --rc genhtml_function_coverage=1 00:03:15.458 --rc genhtml_legend=1 00:03:15.458 --rc geninfo_all_blocks=1 00:03:15.458 --rc geninfo_unexecuted_blocks=1 00:03:15.458 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:15.458 ' 00:03:15.458 06:41:22 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:15.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:15.458 --rc genhtml_branch_coverage=1 00:03:15.458 --rc genhtml_function_coverage=1 00:03:15.458 --rc genhtml_legend=1 00:03:15.458 --rc geninfo_all_blocks=1 00:03:15.458 --rc geninfo_unexecuted_blocks=1 00:03:15.458 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:15.458 ' 00:03:15.458 06:41:22 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:15.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:15.458 --rc genhtml_branch_coverage=1 00:03:15.458 --rc genhtml_function_coverage=1 00:03:15.458 --rc genhtml_legend=1 00:03:15.458 --rc geninfo_all_blocks=1 00:03:15.458 --rc geninfo_unexecuted_blocks=1 00:03:15.458 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:15.458 ' 00:03:15.458 06:41:22 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:15.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:15.458 --rc genhtml_branch_coverage=1 00:03:15.458 --rc genhtml_function_coverage=1 00:03:15.458 --rc genhtml_legend=1 00:03:15.458 --rc geninfo_all_blocks=1 00:03:15.458 --rc geninfo_unexecuted_blocks=1 00:03:15.458 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:15.458 ' 00:03:15.458 06:41:22 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:03:15.458 06:41:22 -- nvmf/common.sh@7 -- # uname -s 00:03:15.458 06:41:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:15.458 06:41:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:15.458 06:41:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:15.458 06:41:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:15.458 06:41:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:15.458 06:41:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:15.458 06:41:22 -- nvmf/common.sh@14 -- 
# NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:15.458 06:41:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:15.458 06:41:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:15.458 06:41:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:15.458 06:41:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:03:15.458 06:41:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:03:15.458 06:41:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:15.458 06:41:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:15.458 06:41:22 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:15.458 06:41:22 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:15.458 06:41:22 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:03:15.458 06:41:22 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:15.458 06:41:22 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:15.458 06:41:22 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:15.458 06:41:22 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:15.458 06:41:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:15.458 06:41:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:15.458 06:41:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:15.458 06:41:22 -- paths/export.sh@5 -- # export PATH 00:03:15.458 06:41:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:15.458 06:41:22 -- nvmf/common.sh@51 -- # : 0 00:03:15.459 06:41:22 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:15.459 06:41:22 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:15.459 06:41:22 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:15.459 06:41:22 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:15.459 06:41:22 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:15.459 06:41:22 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:15.459 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:15.459 06:41:22 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:15.459 06:41:22 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:15.459 06:41:22 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:15.459 06:41:22 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:15.459 06:41:22 -- spdk/autotest.sh@32 -- # uname -s 00:03:15.459 
06:41:22 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:15.459 06:41:22 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:15.459 06:41:22 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:03:15.459 06:41:22 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:15.459 06:41:22 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:03:15.459 06:41:22 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:15.717 06:41:22 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:15.717 06:41:22 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:15.717 06:41:22 -- spdk/autotest.sh@48 -- # udevadm_pid=1091028 00:03:15.717 06:41:22 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:15.717 06:41:22 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:15.718 06:41:22 -- pm/common@17 -- # local monitor 00:03:15.718 06:41:22 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:15.718 06:41:22 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:15.718 06:41:22 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:15.718 06:41:22 -- pm/common@21 -- # date +%s 00:03:15.718 06:41:22 -- pm/common@21 -- # date +%s 00:03:15.718 06:41:22 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:15.718 06:41:22 -- pm/common@25 -- # sleep 1 00:03:15.718 06:41:22 -- pm/common@21 -- # date +%s 00:03:15.718 06:41:22 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733982082 00:03:15.718 06:41:22 -- pm/common@21 -- # date +%s 00:03:15.718 06:41:22 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733982082 00:03:15.718 06:41:22 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733982082 00:03:15.718 06:41:22 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733982082 00:03:15.718 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733982082_collect-vmstat.pm.log 00:03:15.718 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733982082_collect-cpu-load.pm.log 00:03:15.718 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733982082_collect-bmc-pm.bmc.pm.log 00:03:15.718 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733982082_collect-cpu-temp.pm.log 00:03:16.655 06:41:23 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:16.655 06:41:23 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:16.655 06:41:23 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:16.655 06:41:23 -- common/autotest_common.sh@10 -- # set +x 
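The trace above shows autotest.sh saving the old core_pattern, pointing core dumps at core-collector.sh, and launching the pm resource collectors (collect-cpu-load, collect-vmstat, collect-bmc-pm, collect-cpu-temp) in the background, each with its output redirected to a timestamped .pm.log file under the power output directory. A minimal sketch of that launch-and-redirect pattern follows; it uses a stand-in collector and an illustrative /tmp path rather than the real SPDK scripts/perf/pm helpers, and assumes vmstat (procps) is installed:

#!/usr/bin/env bash
# Hedged sketch, not the real SPDK pm helpers: launch one background
# resource monitor and redirect it to a timestamped log, in the same
# spirit as the collect-* launches traced above.

out_dir=${1:-/tmp/power}      # stand-in for the power output directory
stamp=$(date +%s)             # matches the `date +%s` calls in the trace

mkdir -p "$out_dir"

# Stand-in collector: sample vmstat once per second until stopped.
collect_vmstat() {
    while true; do
        vmstat 1 1
        sleep 1
    done
}

log="$out_dir/monitor.autotest.sh.${stamp}_collect-vmstat.pm.log"
echo "Redirecting to $log"

# Run the collector in the background and remember its PID so a later
# cleanup step (autotest_cleanup in the real script) can terminate it.
collect_vmstat > "$log" 2>&1 &
monitor_pid=$!

# ... the test workload would run here ...
sleep 3
kill "$monitor_pid"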
00:03:16.655 06:41:24 -- spdk/autotest.sh@59 -- # create_test_list 00:03:16.655 06:41:24 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:16.655 06:41:24 -- common/autotest_common.sh@10 -- # set +x 00:03:16.655 06:41:24 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/autotest.sh 00:03:16.655 06:41:24 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:03:16.655 06:41:24 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:03:16.655 06:41:24 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:03:16.655 06:41:24 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:03:16.655 06:41:24 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:16.655 06:41:24 -- common/autotest_common.sh@1457 -- # uname 00:03:16.655 06:41:24 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:16.655 06:41:24 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:16.655 06:41:24 -- common/autotest_common.sh@1477 -- # uname 00:03:16.655 06:41:24 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:16.655 06:41:24 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:16.655 06:41:24 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh --version 00:03:16.655 lcov: LCOV version 1.15 00:03:16.655 06:41:24 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_base.info 00:03:21.932 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/mdns_server.gcno 00:03:27.208 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:32.481 06:41:39 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:32.481 06:41:39 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:32.481 06:41:39 -- common/autotest_common.sh@10 -- # set +x 00:03:32.481 06:41:39 -- spdk/autotest.sh@78 -- # rm -f 00:03:32.481 06:41:39 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:35.774 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:03:35.774 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:03:35.774 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:03:35.774 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:03:35.774 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:03:35.774 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:03:35.774 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:03:35.774 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:03:35.774 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:03:35.774 
0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:03:35.774 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:03:35.774 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:03:36.033 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:03:36.033 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:03:36.033 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:03:36.033 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:03:36.033 0000:d8:00.0 (8086 0a54): Already using the nvme driver 00:03:36.033 06:41:43 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:36.033 06:41:43 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:36.033 06:41:43 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:36.033 06:41:43 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:03:36.033 06:41:43 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:03:36.033 06:41:43 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:03:36.033 06:41:43 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:03:36.033 06:41:43 -- common/autotest_common.sh@1669 -- # bdf=0000:d8:00.0 00:03:36.033 06:41:43 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:36.033 06:41:43 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:03:36.033 06:41:43 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:36.033 06:41:43 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:36.033 06:41:43 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:36.033 06:41:43 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:36.033 06:41:43 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:36.033 06:41:43 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:36.033 06:41:43 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:36.033 06:41:43 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:36.033 06:41:43 -- scripts/common.sh@390 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:36.033 No valid GPT data, bailing 00:03:36.033 06:41:43 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:36.033 06:41:43 -- scripts/common.sh@394 -- # pt= 00:03:36.033 06:41:43 -- scripts/common.sh@395 -- # return 1 00:03:36.033 06:41:43 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:36.033 1+0 records in 00:03:36.033 1+0 records out 00:03:36.033 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00611918 s, 171 MB/s 00:03:36.033 06:41:43 -- spdk/autotest.sh@105 -- # sync 00:03:36.033 06:41:43 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:36.033 06:41:43 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:36.033 06:41:43 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:44.155 06:41:50 -- spdk/autotest.sh@111 -- # uname -s 00:03:44.155 06:41:50 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:44.155 06:41:50 -- spdk/autotest.sh@111 -- # [[ 1 -eq 1 ]] 00:03:44.155 06:41:50 -- spdk/autotest.sh@112 -- # run_test setup.sh /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:03:44.155 06:41:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:44.156 06:41:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:44.156 06:41:50 -- common/autotest_common.sh@10 -- # set +x 00:03:44.156 ************************************ 00:03:44.156 
START TEST setup.sh 00:03:44.156 ************************************ 00:03:44.156 06:41:50 setup.sh -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:03:44.156 * Looking for test storage... 00:03:44.156 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:44.156 06:41:51 setup.sh -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:44.156 06:41:51 setup.sh -- common/autotest_common.sh@1711 -- # lcov --version 00:03:44.156 06:41:51 setup.sh -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:44.156 06:41:51 setup.sh -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:44.156 06:41:51 setup.sh -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:44.156 06:41:51 setup.sh -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:44.156 06:41:51 setup.sh -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:44.156 06:41:51 setup.sh -- scripts/common.sh@336 -- # IFS=.-: 00:03:44.156 06:41:51 setup.sh -- scripts/common.sh@336 -- # read -ra ver1 00:03:44.156 06:41:51 setup.sh -- scripts/common.sh@337 -- # IFS=.-: 00:03:44.156 06:41:51 setup.sh -- scripts/common.sh@337 -- # read -ra ver2 00:03:44.156 06:41:51 setup.sh -- scripts/common.sh@338 -- # local 'op=<' 00:03:44.156 06:41:51 setup.sh -- scripts/common.sh@340 -- # ver1_l=2 00:03:44.156 06:41:51 setup.sh -- scripts/common.sh@341 -- # ver2_l=1 00:03:44.156 06:41:51 setup.sh -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:44.156 06:41:51 setup.sh -- scripts/common.sh@344 -- # case "$op" in 00:03:44.156 06:41:51 setup.sh -- scripts/common.sh@345 -- # : 1 00:03:44.156 06:41:51 setup.sh -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:44.156 06:41:51 setup.sh -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:44.156 06:41:51 setup.sh -- scripts/common.sh@365 -- # decimal 1 00:03:44.156 06:41:51 setup.sh -- scripts/common.sh@353 -- # local d=1 00:03:44.156 06:41:51 setup.sh -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:44.156 06:41:51 setup.sh -- scripts/common.sh@355 -- # echo 1 00:03:44.156 06:41:51 setup.sh -- scripts/common.sh@365 -- # ver1[v]=1 00:03:44.156 06:41:51 setup.sh -- scripts/common.sh@366 -- # decimal 2 00:03:44.156 06:41:51 setup.sh -- scripts/common.sh@353 -- # local d=2 00:03:44.156 06:41:51 setup.sh -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:44.156 06:41:51 setup.sh -- scripts/common.sh@355 -- # echo 2 00:03:44.156 06:41:51 setup.sh -- scripts/common.sh@366 -- # ver2[v]=2 00:03:44.156 06:41:51 setup.sh -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:44.156 06:41:51 setup.sh -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:44.156 06:41:51 setup.sh -- scripts/common.sh@368 -- # return 0 00:03:44.156 06:41:51 setup.sh -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:44.156 06:41:51 setup.sh -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:44.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:44.156 --rc genhtml_branch_coverage=1 00:03:44.156 --rc genhtml_function_coverage=1 00:03:44.156 --rc genhtml_legend=1 00:03:44.156 --rc geninfo_all_blocks=1 00:03:44.156 --rc geninfo_unexecuted_blocks=1 00:03:44.156 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:44.156 ' 00:03:44.156 06:41:51 setup.sh -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:44.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:44.156 --rc genhtml_branch_coverage=1 00:03:44.156 --rc genhtml_function_coverage=1 00:03:44.156 --rc genhtml_legend=1 00:03:44.156 --rc geninfo_all_blocks=1 00:03:44.156 --rc geninfo_unexecuted_blocks=1 00:03:44.156 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:44.156 ' 00:03:44.156 06:41:51 setup.sh -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:44.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:44.156 --rc genhtml_branch_coverage=1 00:03:44.156 --rc genhtml_function_coverage=1 00:03:44.156 --rc genhtml_legend=1 00:03:44.156 --rc geninfo_all_blocks=1 00:03:44.156 --rc geninfo_unexecuted_blocks=1 00:03:44.156 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:44.156 ' 00:03:44.156 06:41:51 setup.sh -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:44.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:44.156 --rc genhtml_branch_coverage=1 00:03:44.156 --rc genhtml_function_coverage=1 00:03:44.156 --rc genhtml_legend=1 00:03:44.156 --rc geninfo_all_blocks=1 00:03:44.156 --rc geninfo_unexecuted_blocks=1 00:03:44.156 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:44.156 ' 00:03:44.156 06:41:51 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:44.156 06:41:51 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:44.156 06:41:51 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:03:44.156 06:41:51 setup.sh -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:44.156 06:41:51 setup.sh -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:44.156 
06:41:51 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:44.156 ************************************ 00:03:44.156 START TEST acl 00:03:44.156 ************************************ 00:03:44.156 06:41:51 setup.sh.acl -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:03:44.156 * Looking for test storage... 00:03:44.156 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:44.156 06:41:51 setup.sh.acl -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:44.156 06:41:51 setup.sh.acl -- common/autotest_common.sh@1711 -- # lcov --version 00:03:44.156 06:41:51 setup.sh.acl -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:44.156 06:41:51 setup.sh.acl -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:44.156 06:41:51 setup.sh.acl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:44.156 06:41:51 setup.sh.acl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:44.156 06:41:51 setup.sh.acl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:44.156 06:41:51 setup.sh.acl -- scripts/common.sh@336 -- # IFS=.-: 00:03:44.156 06:41:51 setup.sh.acl -- scripts/common.sh@336 -- # read -ra ver1 00:03:44.156 06:41:51 setup.sh.acl -- scripts/common.sh@337 -- # IFS=.-: 00:03:44.156 06:41:51 setup.sh.acl -- scripts/common.sh@337 -- # read -ra ver2 00:03:44.156 06:41:51 setup.sh.acl -- scripts/common.sh@338 -- # local 'op=<' 00:03:44.156 06:41:51 setup.sh.acl -- scripts/common.sh@340 -- # ver1_l=2 00:03:44.156 06:41:51 setup.sh.acl -- scripts/common.sh@341 -- # ver2_l=1 00:03:44.156 06:41:51 setup.sh.acl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:44.156 06:41:51 setup.sh.acl -- scripts/common.sh@344 -- # case "$op" in 00:03:44.156 06:41:51 setup.sh.acl -- scripts/common.sh@345 -- # : 1 00:03:44.156 06:41:51 setup.sh.acl -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:44.156 06:41:51 setup.sh.acl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:44.156 06:41:51 setup.sh.acl -- scripts/common.sh@365 -- # decimal 1 00:03:44.156 06:41:51 setup.sh.acl -- scripts/common.sh@353 -- # local d=1 00:03:44.156 06:41:51 setup.sh.acl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:44.156 06:41:51 setup.sh.acl -- scripts/common.sh@355 -- # echo 1 00:03:44.156 06:41:51 setup.sh.acl -- scripts/common.sh@365 -- # ver1[v]=1 00:03:44.156 06:41:51 setup.sh.acl -- scripts/common.sh@366 -- # decimal 2 00:03:44.156 06:41:51 setup.sh.acl -- scripts/common.sh@353 -- # local d=2 00:03:44.156 06:41:51 setup.sh.acl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:44.156 06:41:51 setup.sh.acl -- scripts/common.sh@355 -- # echo 2 00:03:44.156 06:41:51 setup.sh.acl -- scripts/common.sh@366 -- # ver2[v]=2 00:03:44.156 06:41:51 setup.sh.acl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:44.156 06:41:51 setup.sh.acl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:44.156 06:41:51 setup.sh.acl -- scripts/common.sh@368 -- # return 0 00:03:44.156 06:41:51 setup.sh.acl -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:44.156 06:41:51 setup.sh.acl -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:44.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:44.156 --rc genhtml_branch_coverage=1 00:03:44.156 --rc genhtml_function_coverage=1 00:03:44.156 --rc genhtml_legend=1 00:03:44.156 --rc geninfo_all_blocks=1 00:03:44.156 --rc geninfo_unexecuted_blocks=1 00:03:44.156 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:44.156 ' 00:03:44.156 06:41:51 setup.sh.acl -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:44.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:44.156 --rc genhtml_branch_coverage=1 00:03:44.156 --rc genhtml_function_coverage=1 00:03:44.156 --rc genhtml_legend=1 00:03:44.156 --rc geninfo_all_blocks=1 00:03:44.156 --rc geninfo_unexecuted_blocks=1 00:03:44.156 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:44.156 ' 00:03:44.156 06:41:51 setup.sh.acl -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:44.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:44.156 --rc genhtml_branch_coverage=1 00:03:44.156 --rc genhtml_function_coverage=1 00:03:44.156 --rc genhtml_legend=1 00:03:44.156 --rc geninfo_all_blocks=1 00:03:44.156 --rc geninfo_unexecuted_blocks=1 00:03:44.156 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:44.156 ' 00:03:44.156 06:41:51 setup.sh.acl -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:44.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:44.156 --rc genhtml_branch_coverage=1 00:03:44.156 --rc genhtml_function_coverage=1 00:03:44.156 --rc genhtml_legend=1 00:03:44.156 --rc geninfo_all_blocks=1 00:03:44.156 --rc geninfo_unexecuted_blocks=1 00:03:44.156 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:44.156 ' 00:03:44.156 06:41:51 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:44.156 06:41:51 setup.sh.acl -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:44.156 06:41:51 setup.sh.acl -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:44.156 06:41:51 setup.sh.acl -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:03:44.156 06:41:51 setup.sh.acl -- 
common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:03:44.156 06:41:51 setup.sh.acl -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:03:44.156 06:41:51 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:03:44.157 06:41:51 setup.sh.acl -- common/autotest_common.sh@1669 -- # bdf=0000:d8:00.0 00:03:44.157 06:41:51 setup.sh.acl -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:44.157 06:41:51 setup.sh.acl -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:03:44.157 06:41:51 setup.sh.acl -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:44.157 06:41:51 setup.sh.acl -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:44.157 06:41:51 setup.sh.acl -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:44.157 06:41:51 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:44.157 06:41:51 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:44.157 06:41:51 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:44.157 06:41:51 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:44.157 06:41:51 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:44.157 06:41:51 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:44.157 06:41:51 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:47.444 06:41:54 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:47.444 06:41:54 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:47.444 06:41:54 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:47.444 06:41:54 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:47.444 06:41:54 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:47.444 06:41:54 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:03:49.978 Hugepages 00:03:49.978 node hugesize free / total 00:03:49.978 06:41:57 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:49.978 06:41:57 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:49.978 06:41:57 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:49.978 06:41:57 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:49.978 06:41:57 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:49.978 06:41:57 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:49.978 06:41:57 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:49.978 06:41:57 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:49.978 06:41:57 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:49.978 00:03:49.978 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:49.978 06:41:57 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:49.978 06:41:57 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:49.978 06:41:57 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:49.978 06:41:57 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:49.978 06:41:57 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:49.978 06:41:57 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:49.978 06:41:57 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:50.239 06:41:57 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:03:50.239 06:41:57 setup.sh.acl -- setup/acl.sh@20 -- # [[ 
ioatdma == nvme ]] 00:03:50.239 06:41:57 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:50.239 06:41:57 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:50.239 06:41:57 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:03:50.239 06:41:57 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:50.239 06:41:57 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:50.239 06:41:57 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:50.239 06:41:57 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:03:50.239 06:41:57 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:50.239 06:41:57 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:50.239 06:41:57 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:50.239 06:41:57 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:03:50.239 06:41:57 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:50.239 06:41:57 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:50.239 06:41:57 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:50.239 06:41:57 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:03:50.239 06:41:57 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:50.239 06:41:57 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:50.239 06:41:57 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:50.239 06:41:57 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:03:50.239 06:41:57 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:50.239 06:41:57 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:50.239 06:41:57 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:50.239 06:41:57 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:03:50.239 06:41:57 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:50.239 06:41:57 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:50.239 06:41:57 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:50.239 06:41:57 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:03:50.239 06:41:57 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:50.239 06:41:57 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:50.239 06:41:57 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:50.239 06:41:57 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:03:50.239 06:41:57 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:50.239 06:41:57 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:50.239 06:41:57 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:50.239 06:41:57 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:03:50.239 06:41:57 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:50.239 06:41:57 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:50.239 06:41:57 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:50.239 06:41:57 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:03:50.239 06:41:57 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:50.239 06:41:57 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:50.239 06:41:57 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:50.239 06:41:57 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:03:50.239 06:41:57 
setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:50.239 06:41:57 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:50.239 06:41:57 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:50.239 06:41:57 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:03:50.239 06:41:57 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:50.239 06:41:57 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:50.239 06:41:57 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:50.239 06:41:57 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:03:50.239 06:41:57 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:50.239 06:41:57 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:50.239 06:41:57 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:50.239 06:41:57 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:03:50.239 06:41:57 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:50.239 06:41:57 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:50.239 06:41:57 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:50.239 06:41:57 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:d8:00.0 == *:*:*.* ]] 00:03:50.239 06:41:57 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:50.239 06:41:57 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:03:50.239 06:41:57 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:50.239 06:41:57 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:50.239 06:41:57 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:50.239 06:41:57 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:50.239 06:41:57 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:50.239 06:41:57 setup.sh.acl -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:50.239 06:41:57 setup.sh.acl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:50.239 06:41:57 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:50.239 ************************************ 00:03:50.239 START TEST denied 00:03:50.239 ************************************ 00:03:50.239 06:41:57 setup.sh.acl.denied -- common/autotest_common.sh@1129 -- # denied 00:03:50.239 06:41:57 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:d8:00.0' 00:03:50.239 06:41:57 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:d8:00.0' 00:03:50.239 06:41:57 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:50.239 06:41:57 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:50.239 06:41:57 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:03:54.494 0000:d8:00.0 (8086 0a54): Skipping denied controller at 0000:d8:00.0 00:03:54.494 06:42:01 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:d8:00.0 00:03:54.494 06:42:01 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:54.494 06:42:01 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:54.494 06:42:01 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:d8:00.0 ]] 00:03:54.494 06:42:01 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:d8:00.0/driver 00:03:54.494 06:42:01 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:54.494 06:42:01 
setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:54.494 06:42:01 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:54.494 06:42:01 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:54.494 06:42:01 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:58.684 00:03:58.684 real 0m7.854s 00:03:58.684 user 0m2.433s 00:03:58.684 sys 0m4.699s 00:03:58.684 06:42:05 setup.sh.acl.denied -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:58.684 06:42:05 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:58.684 ************************************ 00:03:58.684 END TEST denied 00:03:58.684 ************************************ 00:03:58.684 06:42:05 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:58.684 06:42:05 setup.sh.acl -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:58.684 06:42:05 setup.sh.acl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:58.684 06:42:05 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:58.684 ************************************ 00:03:58.684 START TEST allowed 00:03:58.684 ************************************ 00:03:58.684 06:42:05 setup.sh.acl.allowed -- common/autotest_common.sh@1129 -- # allowed 00:03:58.684 06:42:05 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:d8:00.0 00:03:58.684 06:42:05 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:d8:00.0 .*: nvme -> .*' 00:03:58.684 06:42:05 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:58.684 06:42:05 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:58.684 06:42:05 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:02.877 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:02.877 06:42:10 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:04:02.877 06:42:10 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:02.877 06:42:10 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:02.877 06:42:10 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:02.877 06:42:10 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:04:07.073 00:04:07.073 real 0m8.278s 00:04:07.073 user 0m2.182s 00:04:07.073 sys 0m4.601s 00:04:07.073 06:42:13 setup.sh.acl.allowed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:07.073 06:42:13 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:07.073 ************************************ 00:04:07.073 END TEST allowed 00:04:07.073 ************************************ 00:04:07.073 00:04:07.073 real 0m22.772s 00:04:07.073 user 0m6.835s 00:04:07.073 sys 0m13.781s 00:04:07.073 06:42:13 setup.sh.acl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:07.073 06:42:13 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:07.073 ************************************ 00:04:07.073 END TEST acl 00:04:07.073 ************************************ 00:04:07.073 06:42:14 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:04:07.073 06:42:14 setup.sh -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:07.073 06:42:14 setup.sh -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:07.073 06:42:14 setup.sh 
-- common/autotest_common.sh@10 -- # set +x 00:04:07.073 ************************************ 00:04:07.073 START TEST hugepages 00:04:07.073 ************************************ 00:04:07.073 06:42:14 setup.sh.hugepages -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:04:07.073 * Looking for test storage... 00:04:07.073 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:04:07.073 06:42:14 setup.sh.hugepages -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:07.073 06:42:14 setup.sh.hugepages -- common/autotest_common.sh@1711 -- # lcov --version 00:04:07.073 06:42:14 setup.sh.hugepages -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:07.073 06:42:14 setup.sh.hugepages -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:07.073 06:42:14 setup.sh.hugepages -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:07.073 06:42:14 setup.sh.hugepages -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:07.073 06:42:14 setup.sh.hugepages -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:07.073 06:42:14 setup.sh.hugepages -- scripts/common.sh@336 -- # IFS=.-: 00:04:07.073 06:42:14 setup.sh.hugepages -- scripts/common.sh@336 -- # read -ra ver1 00:04:07.073 06:42:14 setup.sh.hugepages -- scripts/common.sh@337 -- # IFS=.-: 00:04:07.073 06:42:14 setup.sh.hugepages -- scripts/common.sh@337 -- # read -ra ver2 00:04:07.073 06:42:14 setup.sh.hugepages -- scripts/common.sh@338 -- # local 'op=<' 00:04:07.073 06:42:14 setup.sh.hugepages -- scripts/common.sh@340 -- # ver1_l=2 00:04:07.073 06:42:14 setup.sh.hugepages -- scripts/common.sh@341 -- # ver2_l=1 00:04:07.073 06:42:14 setup.sh.hugepages -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:07.073 06:42:14 setup.sh.hugepages -- scripts/common.sh@344 -- # case "$op" in 00:04:07.073 06:42:14 setup.sh.hugepages -- scripts/common.sh@345 -- # : 1 00:04:07.073 06:42:14 setup.sh.hugepages -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:07.073 06:42:14 setup.sh.hugepages -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:07.074 06:42:14 setup.sh.hugepages -- scripts/common.sh@365 -- # decimal 1 00:04:07.074 06:42:14 setup.sh.hugepages -- scripts/common.sh@353 -- # local d=1 00:04:07.074 06:42:14 setup.sh.hugepages -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:07.074 06:42:14 setup.sh.hugepages -- scripts/common.sh@355 -- # echo 1 00:04:07.074 06:42:14 setup.sh.hugepages -- scripts/common.sh@365 -- # ver1[v]=1 00:04:07.074 06:42:14 setup.sh.hugepages -- scripts/common.sh@366 -- # decimal 2 00:04:07.074 06:42:14 setup.sh.hugepages -- scripts/common.sh@353 -- # local d=2 00:04:07.074 06:42:14 setup.sh.hugepages -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:07.074 06:42:14 setup.sh.hugepages -- scripts/common.sh@355 -- # echo 2 00:04:07.074 06:42:14 setup.sh.hugepages -- scripts/common.sh@366 -- # ver2[v]=2 00:04:07.074 06:42:14 setup.sh.hugepages -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:07.074 06:42:14 setup.sh.hugepages -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:07.074 06:42:14 setup.sh.hugepages -- scripts/common.sh@368 -- # return 0 00:04:07.074 06:42:14 setup.sh.hugepages -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:07.074 06:42:14 setup.sh.hugepages -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:07.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.074 --rc genhtml_branch_coverage=1 00:04:07.074 --rc genhtml_function_coverage=1 00:04:07.074 --rc genhtml_legend=1 00:04:07.074 --rc geninfo_all_blocks=1 00:04:07.074 --rc geninfo_unexecuted_blocks=1 00:04:07.074 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:07.074 ' 00:04:07.074 06:42:14 setup.sh.hugepages -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:07.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.074 --rc genhtml_branch_coverage=1 00:04:07.074 --rc genhtml_function_coverage=1 00:04:07.074 --rc genhtml_legend=1 00:04:07.074 --rc geninfo_all_blocks=1 00:04:07.074 --rc geninfo_unexecuted_blocks=1 00:04:07.074 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:07.074 ' 00:04:07.074 06:42:14 setup.sh.hugepages -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:07.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.074 --rc genhtml_branch_coverage=1 00:04:07.074 --rc genhtml_function_coverage=1 00:04:07.074 --rc genhtml_legend=1 00:04:07.074 --rc geninfo_all_blocks=1 00:04:07.074 --rc geninfo_unexecuted_blocks=1 00:04:07.074 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:07.074 ' 00:04:07.074 06:42:14 setup.sh.hugepages -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:07.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.074 --rc genhtml_branch_coverage=1 00:04:07.074 --rc genhtml_function_coverage=1 00:04:07.074 --rc genhtml_legend=1 00:04:07.074 --rc geninfo_all_blocks=1 00:04:07.074 --rc geninfo_unexecuted_blocks=1 00:04:07.074 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:07.074 ' 00:04:07.074 06:42:14 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:07.074 06:42:14 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:07.074 06:42:14 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:07.074 06:42:14 
setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:07.074 06:42:14 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:07.074 06:42:14 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:07.074 06:42:14 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:07.074 06:42:14 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:07.074 06:42:14 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:07.074 06:42:14 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:07.074 06:42:14 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.074 06:42:14 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.074 06:42:14 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:07.074 06:42:14 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.074 06:42:14 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.074 06:42:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.074 06:42:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.074 06:42:14 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283760 kB' 'MemFree: 40362880 kB' 'MemAvailable: 44084012 kB' 'Buffers: 9316 kB' 'Cached: 11561136 kB' 'SwapCached: 0 kB' 'Active: 8380832 kB' 'Inactive: 3689028 kB' 'Active(anon): 7963860 kB' 'Inactive(anon): 0 kB' 'Active(file): 416972 kB' 'Inactive(file): 3689028 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 502744 kB' 'Mapped: 164036 kB' 'Shmem: 7464452 kB' 'KReclaimable: 223996 kB' 'Slab: 915936 kB' 'SReclaimable: 223996 kB' 'SUnreclaim: 691940 kB' 'KernelStack: 21840 kB' 'PageTables: 7992 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36433332 kB' 'Committed_AS: 9178364 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214224 kB' 'VmallocChunk: 0 kB' 'Percpu: 74816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 603508 kB' 'DirectMap2M: 12713984 kB' 'DirectMap1G: 56623104 kB' 00:04:07.074 06:42:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.074 06:42:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.074 06:42:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.074 06:42:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.074 06:42:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.074 06:42:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.074 06:42:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.074 06:42:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.074 06:42:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.074 06:42:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.074 06:42:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 
00:04:07.074 06:42:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.074 06:42:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.074 06:42:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.074 06:42:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.074 06:42:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.074 06:42:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.074 06:42:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.074 06:42:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.074 06:42:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.074 06:42:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.074 06:42:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.074 06:42:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.074 06:42:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.074 06:42:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.074 06:42:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.074 06:42:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.074 06:42:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.074 06:42:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.074 06:42:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.074 06:42:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.074 06:42:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.074 06:42:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.074 06:42:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.074 06:42:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.074 06:42:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.074 06:42:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.074 06:42:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.074 06:42:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.074 06:42:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.074 06:42:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.074 06:42:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.074 06:42:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.074 06:42:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.074 06:42:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.074 06:42:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.074 06:42:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.074 06:42:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.074 06:42:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.074 06:42:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.074 06:42:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.074 06:42:14 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.074 06:42:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.074 06:42:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.074 06:42:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.074 06:42:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.074 06:42:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.074 06:42:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.074 06:42:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.074 06:42:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.074 06:42:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.074 06:42:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.074 06:42:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.074 06:42:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.074 06:42:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.074 06:42:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.074 06:42:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.074 06:42:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r 
var val _ 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.075 
06:42:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.075 
06:42:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.075 06:42:14 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:04:07.076 06:42:14 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:04:07.076 06:42:14 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 
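Note: the loop traced above is setup/common.sh's get_meminfo walking /proc/meminfo with IFS=': ', skipping every key until it reaches Hugepagesize, then echoing the value (2048 kB) that becomes default_hugepages. A minimal standalone sketch of that lookup pattern follows, under the assumption that a plain while-read over /proc/meminfo is close enough to the mem-array version the script actually uses; the helper name below is mine, not SPDK's:

    # Sketch only: look up one key in /proc/meminfo the way the trace does,
    # splitting each line on ': ' and printing the numeric field.
    meminfo_lookup() {
        local key=$1 var val _
        while IFS=': ' read -r var val _; do
            if [[ $var == "$key" ]]; then
                echo "$val"    # kB for sized fields, a bare count for HugePages_*
                return 0
            fi
        done < /proc/meminfo
        return 1               # key not present
    }

    default_hugepages=$(meminfo_lookup Hugepagesize)   # 2048 on this runner

Judging by the mem=("${mem[@]#Node +([0-9]) }") expansion visible later in the trace, the real helper additionally strips a leading "Node <n> " prefix so the same loop can run against /sys/devices/system/node/node<n>/meminfo when a node is given.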
00:04:07.076 06:42:14 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:07.076 06:42:14 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:07.076 06:42:14 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGEMEM 00:04:07.076 06:42:14 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGENODE 00:04:07.076 06:42:14 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v NRHUGE 00:04:07.076 06:42:14 setup.sh.hugepages -- setup/hugepages.sh@197 -- # get_nodes 00:04:07.076 06:42:14 setup.sh.hugepages -- setup/hugepages.sh@26 -- # local node 00:04:07.076 06:42:14 setup.sh.hugepages -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:07.076 06:42:14 setup.sh.hugepages -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:04:07.076 06:42:14 setup.sh.hugepages -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:07.076 06:42:14 setup.sh.hugepages -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:04:07.076 06:42:14 setup.sh.hugepages -- setup/hugepages.sh@31 -- # no_nodes=2 00:04:07.076 06:42:14 setup.sh.hugepages -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:04:07.076 06:42:14 setup.sh.hugepages -- setup/hugepages.sh@198 -- # clear_hp 00:04:07.076 06:42:14 setup.sh.hugepages -- setup/hugepages.sh@36 -- # local node hp 00:04:07.076 06:42:14 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}" 00:04:07.076 06:42:14 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:07.076 06:42:14 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:04:07.076 06:42:14 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:07.076 06:42:14 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:04:07.076 06:42:14 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}" 00:04:07.076 06:42:14 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:07.076 06:42:14 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:04:07.076 06:42:14 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:07.076 06:42:14 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:04:07.076 06:42:14 setup.sh.hugepages -- setup/hugepages.sh@44 -- # export CLEAR_HUGE=yes 00:04:07.076 06:42:14 setup.sh.hugepages -- setup/hugepages.sh@44 -- # CLEAR_HUGE=yes 00:04:07.076 06:42:14 setup.sh.hugepages -- setup/hugepages.sh@200 -- # run_test single_node_setup single_node_setup 00:04:07.076 06:42:14 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:07.076 06:42:14 setup.sh.hugepages -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:07.076 06:42:14 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:07.076 ************************************ 00:04:07.076 START TEST single_node_setup 00:04:07.076 ************************************ 00:04:07.076 06:42:14 setup.sh.hugepages.single_node_setup -- common/autotest_common.sh@1129 -- # single_node_setup 00:04:07.076 06:42:14 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@135 -- # get_test_nr_hugepages 2097152 0 00:04:07.076 06:42:14 setup.sh.hugepages.single_node_setup 
-- setup/hugepages.sh@48 -- # local size=2097152 00:04:07.076 06:42:14 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@49 -- # (( 2 > 1 )) 00:04:07.076 06:42:14 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@50 -- # shift 00:04:07.076 06:42:14 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@51 -- # node_ids=('0') 00:04:07.076 06:42:14 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@51 -- # local node_ids 00:04:07.076 06:42:14 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:04:07.076 06:42:14 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@56 -- # nr_hugepages=1024 00:04:07.076 06:42:14 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 0 00:04:07.076 06:42:14 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@61 -- # user_nodes=('0') 00:04:07.076 06:42:14 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@61 -- # local user_nodes 00:04:07.076 06:42:14 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024 00:04:07.076 06:42:14 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:04:07.076 06:42:14 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@66 -- # nodes_test=() 00:04:07.076 06:42:14 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@66 -- # local -g nodes_test 00:04:07.076 06:42:14 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@68 -- # (( 1 > 0 )) 00:04:07.076 06:42:14 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@69 -- # for _no_nodes in "${user_nodes[@]}" 00:04:07.076 06:42:14 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@70 -- # nodes_test[_no_nodes]=1024 00:04:07.076 06:42:14 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@72 -- # return 0 00:04:07.076 06:42:14 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@136 -- # NRHUGE=1024 00:04:07.076 06:42:14 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@136 -- # HUGENODE=0 00:04:07.076 06:42:14 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@136 -- # setup output 00:04:07.076 06:42:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:04:07.076 06:42:14 setup.sh.hugepages.single_node_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:10.369 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:10.369 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:10.369 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:10.369 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:10.369 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:10.369 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:10.369 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:10.369 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:10.369 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:10.369 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:10.369 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:10.369 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:10.369 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:10.369 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:10.369 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:10.369 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:11.755 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:11.755 06:42:19 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@137 -- # 
verify_nr_hugepages 00:04:11.755 06:42:19 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@88 -- # local node 00:04:11.755 06:42:19 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@89 -- # local sorted_t 00:04:11.755 06:42:19 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@90 -- # local sorted_s 00:04:11.755 06:42:19 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@91 -- # local surp 00:04:11.755 06:42:19 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@92 -- # local resv 00:04:11.755 06:42:19 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@93 -- # local anon 00:04:11.755 06:42:19 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:11.755 06:42:19 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:04:11.755 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:11.755 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node= 00:04:11.755 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:04:11.755 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:11.756 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.756 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:11.756 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:11.756 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.756 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.756 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283760 kB' 'MemFree: 42570036 kB' 'MemAvailable: 46290624 kB' 'Buffers: 9316 kB' 'Cached: 11561272 kB' 'SwapCached: 0 kB' 'Active: 8384252 kB' 'Inactive: 3689028 kB' 'Active(anon): 7967280 kB' 'Inactive(anon): 0 kB' 'Active(file): 416972 kB' 'Inactive(file): 3689028 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 506768 kB' 'Mapped: 165400 kB' 'Shmem: 7464588 kB' 'KReclaimable: 222908 kB' 'Slab: 913140 kB' 'SReclaimable: 222908 kB' 'SUnreclaim: 690232 kB' 'KernelStack: 21984 kB' 'PageTables: 8420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481908 kB' 'Committed_AS: 9180976 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214320 kB' 'VmallocChunk: 0 kB' 'Percpu: 74816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 603508 kB' 'DirectMap2M: 12713984 kB' 'DirectMap1G: 56623104 kB' 00:04:11.756 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.756 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.756 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
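Note on the steps leading into this dump: clear_hp zeroed every per-node hugepage counter (the repeated echo 0 lines), get_test_nr_hugepages turned the requested size of 2097152 into nr_hugepages=1024 against the 2048 kB default page size and pinned it to node 0, and scripts/setup.sh then allocated the pool and rebound the ioatdma and NVMe devices to vfio-pci; the meminfo snapshot above shows the result (HugePages_Total: 1024, Hugetlb: 2097152 kB). The following is only a hand-rolled approximation of that reset-and-allocate sequence through the standard sysfs paths, not the SPDK scripts themselves, and it needs root:

    # Reset every node's hugepage pools, then allocate NRHUGE=1024 2 MiB pages
    # on HUGENODE=0, matching the values the trace computes.
    size_kb=2097152
    page_kb=$(awk '$1 == "Hugepagesize:" {print $2}' /proc/meminfo)   # 2048
    nr=$(( size_kb / page_kb ))                                       # 1024

    for hp in /sys/devices/system/node/node[0-9]*/hugepages/hugepages-*; do
        echo 0 | sudo tee "$hp/nr_hugepages" > /dev/null              # clear_hp equivalent
    done
    echo "$nr" | sudo tee \
        "/sys/devices/system/node/node0/hugepages/hugepages-${page_kb}kB/nr_hugepages" > /dev/null

Both operands of the division are treated as kB here; that matches the numbers in the trace, where 1024 pages of 2048 kB add up to the 2097152 kB reported as Hugetlb.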
00:04:11.756 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.756 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.756 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.756 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.756 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.756 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.756 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.756 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.756 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.756 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.756 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.756 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.756 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.756 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.756 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.756 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.756 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.756 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.756 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.756 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.756 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.756 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.756 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.756 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.756 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.756 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.756 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.756 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.756 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.756 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.756 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.756 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.756 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.756 06:42:19 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.756 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.756 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.756 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.756 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.756 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.756 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.756 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.756 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.756 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.756 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.756 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.756 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.756 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.756 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.756 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.756 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.756 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.756 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.756 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.756 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.756 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.756 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.756 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.756 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.756 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.756 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.756 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.756 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.756 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.756 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.756 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.756 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.756 06:42:19 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:11.756 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.756 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.756 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.756 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.756 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.756 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.756 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.756 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.756 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.756 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.756 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.756 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.756 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.756 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.756 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.756 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.756 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.756 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.756 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.756 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.756 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.756 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.756 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.756 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.756 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.756 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.756 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.756 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.756 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.756 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.756 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SReclaimable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 
00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0 00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0 00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@96 -- # anon=0 00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node= 00:04:11.757 06:42:19 
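Note: the AnonHugePages scan above ended with anon=0, and the trace is now starting the same full /proc/meminfo walk again for HugePages_Surp (and will do so once more for HugePages_Rsvd). Each pass extracts a single counter, so as an illustration only (this is not how setup/common.sh is written) the same values can be collected in one awk pass:

    # Illustration: gather the counters verify_nr_hugepages is after in one pass
    # instead of one full key scan per field as the xtrace shows.
    read -r anon surp resv total free < <(awk '
        $1 == "AnonHugePages:"   { anon  = $2 }
        $1 == "HugePages_Surp:"  { surp  = $2 }
        $1 == "HugePages_Rsvd:"  { resv  = $2 }
        $1 == "HugePages_Total:" { total = $2 }
        $1 == "HugePages_Free:"  { free  = $2 }
        END { print anon, surp, resv, total, free }' /proc/meminfo)
    echo "anon=${anon}kB surp=$surp resv=$resv total=$total free=$free"

On this runner the meminfo dumps show anon, surp and resv all at 0, which leaves the whole HugePages_Total of 1024 attributable to the pool the test requested.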
setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283760 kB' 'MemFree: 42566640 kB' 'MemAvailable: 46287228 kB' 'Buffers: 9316 kB' 'Cached: 11561276 kB' 'SwapCached: 0 kB' 'Active: 8387508 kB' 'Inactive: 3689028 kB' 'Active(anon): 7970536 kB' 'Inactive(anon): 0 kB' 'Active(file): 416972 kB' 'Inactive(file): 3689028 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 509716 kB' 'Mapped: 164780 kB' 'Shmem: 7464592 kB' 'KReclaimable: 222908 kB' 'Slab: 913104 kB' 'SReclaimable: 222908 kB' 'SUnreclaim: 690196 kB' 'KernelStack: 21984 kB' 'PageTables: 7980 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481908 kB' 'Committed_AS: 9183728 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214276 kB' 'VmallocChunk: 0 kB' 'Percpu: 74816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 603508 kB' 'DirectMap2M: 12713984 kB' 'DirectMap1G: 56623104 kB' 00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.757 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # 
[[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # 
continue 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.758 06:42:19 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.758 06:42:19 
setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.758 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.759 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.759 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.759 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.759 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.759 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.759 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.759 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.759 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.759 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.759 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.759 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.759 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.759 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.759 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.759 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.759 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.759 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.759 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.759 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.759 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.759 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.759 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.759 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.759 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.759 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.759 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.759 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.759 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.759 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.759 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.759 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.759 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.759 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.759 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.759 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.759 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.759 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.759 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.759 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.759 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.759 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.759 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.759 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.759 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.759 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.759 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.759 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.759 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.759 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.759 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.759 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.759 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0 00:04:11.759 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0 00:04:11.759 06:42:19 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@98 -- # surp=0 00:04:11.759 06:42:19 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:04:11.759 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local 
get=HugePages_Rsvd 00:04:11.759 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node= 00:04:11.759 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:04:11.759 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:11.759 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.759 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:11.759 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:11.759 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.759 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.759 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.759 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.759 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283760 kB' 'MemFree: 42565404 kB' 'MemAvailable: 46285992 kB' 'Buffers: 9316 kB' 'Cached: 11561276 kB' 'SwapCached: 0 kB' 'Active: 8382456 kB' 'Inactive: 3689028 kB' 'Active(anon): 7965484 kB' 'Inactive(anon): 0 kB' 'Active(file): 416972 kB' 'Inactive(file): 3689028 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 504116 kB' 'Mapped: 164508 kB' 'Shmem: 7464592 kB' 'KReclaimable: 222908 kB' 'Slab: 913104 kB' 'SReclaimable: 222908 kB' 'SUnreclaim: 690196 kB' 'KernelStack: 21920 kB' 'PageTables: 8384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481908 kB' 'Committed_AS: 9179136 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214352 kB' 'VmallocChunk: 0 kB' 'Percpu: 74816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 603508 kB' 'DirectMap2M: 12713984 kB' 'DirectMap1G: 56623104 kB' 00:04:11.759 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.759 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.759 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.759 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.759 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.759 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.759 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.759 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.759 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.759 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.759 06:42:19 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.759 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.759 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.759 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.759 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.759 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.759 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.759 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.759 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.759 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.759 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.759 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.759 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.759 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.759 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.759 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.759 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.759 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.759 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.759 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.759 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.759 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.759 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.759 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.759 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.759 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.759 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.759 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.759 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.759 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.759 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.759 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.760 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.760 06:42:19 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.760 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.760 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.760 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.760 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.760 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.760 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.760 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.760 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.760 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.760 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.760 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.760 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.760 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.760 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.760 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.760 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.760 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.760 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.760 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.760 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.760 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.760 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.760 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.760 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.760 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.760 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.760 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.760 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.760 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.760 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.760 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.760 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.760 06:42:19 
setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.760 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.760 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.760 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.760 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.760 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.760 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.760 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.760 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.760 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.760 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.760 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.760 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.760 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.760 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.760 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.760 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.760 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.760 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.760 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.760 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.760 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.760 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.760 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.760 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.760 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.760 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.760 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.760 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.760 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.760 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.760 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.760 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.760 06:42:19 
setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.760 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.760 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.760 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.760 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.760 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.760 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.760 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.760 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.760 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.760 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.760 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.760 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.760 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.760 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.760 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.760 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.760 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.760 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.760 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.760 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.760 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.760 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.760 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.760 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.760 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.760 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.760 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.760 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.760 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.760 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.760 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.760 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.760 06:42:19 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.760 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.761 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.761 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.761 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.761 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.761 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.761 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.761 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.761 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.761 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.761 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.761 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.761 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.761 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.761 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.761 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.761 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.761 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.761 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.761 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.761 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.761 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.761 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.761 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.761 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.761 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.761 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.761 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.761 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.761 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.761 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.761 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.761 06:42:19 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.761 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.761 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.761 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.761 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.761 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.761 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.761 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.761 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.761 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.761 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.761 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.761 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.761 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.761 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.761 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.761 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.761 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.761 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.761 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.761 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.761 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.761 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.761 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.761 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.761 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.761 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0 00:04:11.761 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0 00:04:11.761 06:42:19 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@99 -- # resv=0 00:04:11.761 06:42:19 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024 00:04:11.761 nr_hugepages=1024 00:04:11.761 06:42:19 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:04:11.761 resv_hugepages=0 00:04:11.761 06:42:19 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:04:11.761 surplus_hugepages=0 00:04:11.761 06:42:19 
setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:04:11.761 anon_hugepages=0 00:04:11.761 06:42:19 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:11.761 06:42:19 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages )) 00:04:11.761 06:42:19 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:04:11.761 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:11.761 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node= 00:04:11.761 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:04:11.761 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:11.761 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.761 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:11.761 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:11.761 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.761 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.761 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.761 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.761 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283760 kB' 'MemFree: 42565264 kB' 'MemAvailable: 46285852 kB' 'Buffers: 9316 kB' 'Cached: 11561276 kB' 'SwapCached: 0 kB' 'Active: 8382588 kB' 'Inactive: 3689028 kB' 'Active(anon): 7965616 kB' 'Inactive(anon): 0 kB' 'Active(file): 416972 kB' 'Inactive(file): 3689028 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 503756 kB' 'Mapped: 164096 kB' 'Shmem: 7464592 kB' 'KReclaimable: 222908 kB' 'Slab: 913096 kB' 'SReclaimable: 222908 kB' 'SUnreclaim: 690188 kB' 'KernelStack: 22032 kB' 'PageTables: 8372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481908 kB' 'Committed_AS: 9179156 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214320 kB' 'VmallocChunk: 0 kB' 'Percpu: 74816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 603508 kB' 'DirectMap2M: 12713984 kB' 'DirectMap1G: 56623104 kB' 00:04:11.761 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.761 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.761 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.761 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.761 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.761 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.761 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.761 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.761 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.761 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.761 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.761 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.761 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.761 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.761 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.761 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.761 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.761 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.761 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.761 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.761 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.761 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.761 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.761 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.761 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.761 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.761 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.761 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@32 -- # continue 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read 
-r var val _ 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ 
Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.762 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.763 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.763 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.763 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.763 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.763 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.763 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.763 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.763 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.763 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.763 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.763 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.763 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.763 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.763 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.763 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.763 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.763 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.763 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.763 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.763 06:42:19 
setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.763 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.763 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.763 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.763 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.763 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.763 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.763 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.763 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.763 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.763 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.763 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.763 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.763 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.763 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.763 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.763 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.763 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.763 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.763 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.763 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.763 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.763 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.763 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.763 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 1024 00:04:11.763 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0 00:04:11.763 06:42:19 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:11.763 06:42:19 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@111 -- # get_nodes 00:04:11.763 06:42:19 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@26 -- # local node 00:04:11.763 06:42:19 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:11.763 06:42:19 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:04:11.763 06:42:19 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:11.763 06:42:19 setup.sh.hugepages.single_node_setup -- 
setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=0 00:04:11.763 06:42:19 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@31 -- # no_nodes=2 00:04:11.763 06:42:19 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:04:11.763 06:42:19 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:04:11.763 06:42:19 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:04:11.763 06:42:19 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:04:11.763 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:11.763 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node=0 00:04:11.763 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:04:11.763 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:11.763 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.763 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:11.763 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:11.763 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.763 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.763 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.763 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.763 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32585368 kB' 'MemFree: 26187616 kB' 'MemUsed: 6397752 kB' 'SwapCached: 0 kB' 'Active: 2731644 kB' 'Inactive: 88372 kB' 'Active(anon): 2493092 kB' 'Inactive(anon): 0 kB' 'Active(file): 238552 kB' 'Inactive(file): 88372 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2601056 kB' 'Mapped: 93360 kB' 'AnonPages: 222036 kB' 'Shmem: 2274132 kB' 'KernelStack: 12744 kB' 'PageTables: 4836 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 125300 kB' 'Slab: 430712 kB' 'SReclaimable: 125300 kB' 'SUnreclaim: 305412 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:11.763 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.763 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.763 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.763 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.763 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.763 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:11.763 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.763 06:42:19 setup.sh.hugepages.single_node_setup -- 
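Here the script moves from system-wide to per-node accounting: setup/hugepages.sh's get_nodes walks /sys/devices/system/node/node<N>, records each node's hugepage count in nodes_sys[] (1024 on node0, 0 on node1 in this run, so no_nodes=2), and the caller then re-runs get_meminfo with a node argument, which makes it read /sys/devices/system/node/node0/meminfo and strip the "Node 0 " prefix. Below is a hedged sketch of that bookkeeping; the variable names (nodes_sys, no_nodes) come from the trace, but the path used to read the per-node count is an assumption, since the log only shows the already-expanded values.

#!/usr/bin/env bash
# Sketch of the node bookkeeping visible in the setup/hugepages.sh trace.
shopt -s extglob

declare -a nodes_sys
no_nodes=0

get_nodes() {
    local node
    for node in /sys/devices/system/node/node+([0-9]); do
        # Assumed source of the count; the trace only shows the final values.
        nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    no_nodes=${#nodes_sys[@]}
    (( no_nodes > 0 ))   # at least one NUMA node must exist
}

get_nodes
# In this run: nodes_sys[0]=1024, nodes_sys[1]=0, no_nodes=2.
declare -p nodes_sys no_nodes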
setup/common.sh@31 -- # read -r var val _
[ xtrace field scan omitted: setup/common.sh@31-32 repeats IFS=': ' / read -r var val _ / continue for each remaining node0 meminfo field (MemUsed through HugePages_Free), none of which matches HugePages_Surp ]
00:04:11.764 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.764 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0
00:04:11.764 06:42:19 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0
00:04:11.764 06:42:19 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 ))
00:04:11.764 06:42:19 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}"
00:04:11.764 06:42:19 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1
00:04:11.765 06:42:19 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1
00:04:11.765 06:42:19 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@127 -- # echo 'node0=1024 expecting 1024'
00:04:11.765 node0=1024 expecting 1024
00:04:11.765 06:42:19 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@129 -- # [[ 1024 == \1\0\2\4 ]]
00:04:11.765
00:04:11.765 real 0m4.943s
00:04:11.765 user 0m1.196s
00:04:11.765 sys 0m2.169s
00:04:11.765 06:42:19 setup.sh.hugepages.single_node_setup -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:11.765 06:42:19 setup.sh.hugepages.single_node_setup -- common/autotest_common.sh@10 -- # set +x
00:04:11.765 ************************************
00:04:11.765 END TEST single_node_setup
00:04:11.765 ************************************
00:04:12.024 06:42:19 setup.sh.hugepages -- setup/hugepages.sh@201 -- # run_test even_2G_alloc even_2G_alloc
00:04:12.024 06:42:19 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:12.024 06:42:19 setup.sh.hugepages -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:12.025 06:42:19 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:12.025 ************************************
00:04:12.025 START TEST even_2G_alloc
00:04:12.025 ************************************
00:04:12.025 06:42:19 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1129 -- # even_2G_alloc
00:04:12.025 06:42:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@142 -- # get_test_nr_hugepages 2097152
00:04:12.025 06:42:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@48 -- # local size=2097152
00:04:12.025 06:42:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 ))
00:04:12.025 06:42:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages ))
00:04:12.025 06:42:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1024
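At this point the trace shows get_test_nr_hugepages turning the requested size 2097152 into nr_hugepages=1024, i.e. one 2048 kB hugepage per 2048 kB of the request. A minimal, standalone sketch of that conversion (not the SPDK helper itself; treating the argument as a size in kB and reading the default hugepage size from /proc/meminfo are assumptions here):

```bash
#!/usr/bin/env bash
# Sketch: derive a hugepage count from a requested size in kB, using the
# system default hugepage size reported by /proc/meminfo (assumption: the
# size argument is in kB, like the 2097152 seen in the trace above).
get_nr_hugepages() {
    local size_kb=$1
    local hugepagesize_kb
    # "Hugepagesize:       2048 kB" -> 2048
    hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)
    (( size_kb >= hugepagesize_kb )) || { echo "requested size smaller than one hugepage" >&2; return 1; }
    echo $(( size_kb / hugepagesize_kb ))
}

get_nr_hugepages 2097152   # prints 1024 on a 2048 kB hugepage system
```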
00:04:12.025 06:42:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node
00:04:12.025 06:42:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@61 -- # user_nodes=()
00:04:12.025 06:42:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@61 -- # local user_nodes
00:04:12.025 06:42:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024
00:04:12.025 06:42:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2
00:04:12.025 06:42:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@66 -- # nodes_test=()
00:04:12.025 06:42:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test
00:04:12.025 06:42:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 ))
00:04:12.025 06:42:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@73 -- # (( 0 > 0 ))
00:04:12.025 06:42:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 ))
00:04:12.025 06:42:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=512
00:04:12.025 06:42:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # : 512
00:04:12.025 06:42:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 1
00:04:12.025 06:42:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 ))
00:04:12.025 06:42:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=512
00:04:12.025 06:42:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # : 0
00:04:12.025 06:42:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:04:12.025 06:42:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 ))
00:04:12.025 06:42:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@143 -- # NRHUGE=1024
00:04:12.025 06:42:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@143 -- # setup output
00:04:12.025 06:42:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:12.025 06:42:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:04:14.564 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:04:14.564 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:04:14.564 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:04:14.564 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:04:14.564 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:04:14.564 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:04:14.564 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:04:14.564 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:04:14.564 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:04:14.564 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:04:14.564 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:04:14.564 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:04:14.564 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:04:14.564 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:04:14.564 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:04:14.564 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:04:14.564 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
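The trace above shows get_test_nr_hugepages_per_node spreading the 1024 pages evenly over the two NUMA nodes (512 each, via the nodes_test loop at hugepages.sh@80-@83) before setup.sh is invoked. A rough, independent sketch of that even split (node-count detection here is an assumption, not the SPDK code):

```bash
#!/usr/bin/env bash
# Sketch: spread nr_hugepages evenly across NUMA nodes, mirroring the
# 512/512 split seen in the trace for 1024 pages on 2 nodes.
nr_hugepages=1024
no_nodes=$(ls -d /sys/devices/system/node/node[0-9]* 2>/dev/null | wc -l)
(( no_nodes > 0 )) || no_nodes=1   # fall back to a single node

declare -a nodes_test
per_node=$(( nr_hugepages / no_nodes ))   # even share (remainder ignored here)
for (( node = no_nodes - 1; node >= 0; node-- )); do
    nodes_test[node]=$per_node
done

# With nr_hugepages=1024 and 2 nodes this yields nodes_test=(512 512).
declare -p nodes_test
```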
00:04:14.829 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@144 -- # verify_nr_hugepages
00:04:14.829 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@88 -- # local node
00:04:14.829 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local sorted_t
00:04:14.829 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_s
00:04:14.829 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local surp
00:04:14.829 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local resv
00:04:14.829 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local anon
00:04:14.829 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:14.829 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages
00:04:14.829 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:14.829 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:14.829 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:14.829 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:14.829 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:14.829 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:14.829 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:14.829 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:14.829 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:14.829 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:14.829 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:14.829 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283760 kB' 'MemFree: 42583472 kB' 'MemAvailable: 46304076 kB' 'Buffers: 9316 kB' 'Cached: 11561432 kB' 'SwapCached: 0 kB' 'Active: 8383124 kB' 'Inactive: 3689028 kB' 'Active(anon): 7966152 kB' 'Inactive(anon): 0 kB' 'Active(file): 416972 kB' 'Inactive(file): 3689028 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 504676 kB' 'Mapped: 164220 kB' 'Shmem: 7464748 kB' 'KReclaimable: 222940 kB' 'Slab: 913456 kB' 'SReclaimable: 222940 kB' 'SUnreclaim: 690516 kB' 'KernelStack: 21968 kB' 'PageTables: 8304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481908 kB' 'Committed_AS: 9180144 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214432 kB' 'VmallocChunk: 0 kB' 'Percpu: 74816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 603508 kB' 'DirectMap2M: 12713984 kB' 'DirectMap1G: 56623104 kB'
[ xtrace field scan omitted: setup/common.sh@31-32 repeats IFS=': ' / read -r var val _ / continue for each /proc/meminfo field from MemTotal through HardwareCorrupted, none of which matches AnonHugePages ]
00:04:14.830 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:14.830 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:14.830 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:14.830 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # anon=0
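The get_meminfo calls traced here (setup/common.sh@17-@33) read /proc/meminfo, or a per-node meminfo file with its "Node N" prefix stripped, and scan it field by field with IFS=': ' until the requested key matches. A self-contained approximation of that parser (function and variable names here are illustrative, not the SPDK implementation):

```bash
#!/usr/bin/env bash
# Sketch: return one field from /proc/meminfo or a per-node meminfo file,
# scanning line by line the way the traced read/continue loop does.
get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo var val _
    # Per-node files live under /sys and prefix each line with "Node <n> ".
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
        && mem_f=/sys/devices/system/node/node$node/meminfo
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
    done < <(sed 's/^Node [0-9]* //' "$mem_f")   # drop the per-node prefix
    echo 0
}

get_meminfo AnonHugePages      # e.g. 0
get_meminfo HugePages_Total    # e.g. 1024
get_meminfo HugePages_Free 0   # node0 value, if node0/meminfo exists
```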
00:04:14.830 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp
00:04:14.830 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:14.830 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:14.830 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:14.830 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:14.830 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:14.830 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:14.830 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:14.830 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:14.830 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:14.831 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:14.831 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:14.831 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283760 kB' 'MemFree: 42586212 kB' 'MemAvailable: 46306816 kB' 'Buffers: 9316 kB' 'Cached: 11561436 kB' 'SwapCached: 0 kB' 'Active: 8382612 kB' 'Inactive: 3689028 kB' 'Active(anon): 7965640 kB' 'Inactive(anon): 0 kB' 'Active(file): 416972 kB' 'Inactive(file): 3689028 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 504176 kB' 'Mapped: 164168 kB' 'Shmem: 7464752 kB' 'KReclaimable: 222940 kB' 'Slab: 913396 kB' 'SReclaimable: 222940 kB' 'SUnreclaim: 690456 kB' 'KernelStack: 21808 kB' 'PageTables: 7884 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481908 kB' 'Committed_AS: 9180016 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214384 kB' 'VmallocChunk: 0 kB' 'Percpu: 74816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 603508 kB' 'DirectMap2M: 12713984 kB' 'DirectMap1G: 56623104 kB'
[ xtrace field scan omitted: setup/common.sh@31-32 repeats IFS=': ' / read -r var val _ / continue for each /proc/meminfo field from MemTotal through HugePages_Rsvd, none of which matches HugePages_Surp ]
00:04:14.832 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:14.832 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:14.832 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:14.832 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@98 -- # surp=0
00:04:14.832 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd
00:04:14.832 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:14.832 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:14.832 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:14.832 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:14.832 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:14.832 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:14.832 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:14.832 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:14.832 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:14.832 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:14.832 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:14.832 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283760 kB' 'MemFree: 42585240 kB' 'MemAvailable: 46305844 kB' 'Buffers: 9316 kB' 'Cached: 11561452 kB' 'SwapCached: 0 kB' 'Active: 8382400 kB' 'Inactive: 3689028 kB' 'Active(anon): 7965428 kB' 'Inactive(anon): 0 kB' 'Active(file): 416972 kB' 'Inactive(file): 3689028 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 503864 kB' 'Mapped: 164108 kB' 'Shmem: 7464768 kB' 'KReclaimable: 222940 kB' 'Slab: 913524 kB' 'SReclaimable: 222940 kB' 'SUnreclaim: 690584 kB' 'KernelStack: 22016 kB' 'PageTables: 8128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481908 kB' 'Committed_AS: 9178532 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214416 kB' 'VmallocChunk: 0 kB'
'Percpu: 74816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 603508 kB' 'DirectMap2M: 12713984 kB' 'DirectMap1G: 56623104 kB' 00:04:14.832 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.832 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.832 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.832 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.832 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.832 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.832 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.832 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.832 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.832 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.832 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.832 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.832 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.832 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.832 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.833 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.833 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.833 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.833 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.833 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.833 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.833 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.833 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.833 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.833 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.833 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.833 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.833 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.833 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.833 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.833 06:42:22 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.833 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.833 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.833 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.833 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.833 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.833 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.833 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.833 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.833 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.833 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.833 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.833 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.833 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.833 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.833 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.833 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.833 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.833 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.833 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.833 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.833 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.833 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.833 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.833 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.833 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.833 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.833 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.833 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.833 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.833 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.833 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.833 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.833 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.833 06:42:22 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.833 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.833 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.833 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.833 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.833 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.833 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.833 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.833 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.833 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.833 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.833 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.833 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.833 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.833 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.833 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.833 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.833 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.833 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.833 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.833 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.833 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.833 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.833 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.833 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.833 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.833 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.833 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.833 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.833 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.833 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.833 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.833 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.833 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.833 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.833 06:42:22 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.833 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.833 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.833 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.833 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.833 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.833 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.833 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.833 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.833 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.833 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.833 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.833 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.833 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.833 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.833 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.833 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.833 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.833 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.833 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.833 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.833 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.834 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.834 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.834 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.834 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.834 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.834 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.834 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.834 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.834 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.834 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.834 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.834 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.834 06:42:22 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.834 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.834 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.834 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.834 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.834 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.834 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.834 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.834 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.834 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.834 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.834 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.834 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.834 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.834 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.834 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.834 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.834 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.834 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.834 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.834 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.834 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.834 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.834 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.834 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.834 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.834 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.834 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.834 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.834 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.834 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.834 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.834 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.834 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.834 06:42:22 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:14.834 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.834 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.834 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.834 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.834 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.834 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.834 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.834 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.834 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.834 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.834 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.834 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.834 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.834 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.834 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.834 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.834 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.834 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.834 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.834 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.834 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.834 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.834 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.834 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.834 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.834 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.834 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.834 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.834 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.834 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.834 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.834 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.834 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.834 06:42:22 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@33 -- # echo 0 00:04:14.834 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:14.834 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # resv=0 00:04:14.834 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024 00:04:14.834 nr_hugepages=1024 00:04:14.834 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:04:14.834 resv_hugepages=0 00:04:14.834 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:04:14.834 surplus_hugepages=0 00:04:14.834 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:04:14.834 anon_hugepages=0 00:04:14.834 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:14.834 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages )) 00:04:14.834 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:04:14.834 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:14.834 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:14.834 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:14.834 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:14.834 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:14.834 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:14.834 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:14.834 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:14.834 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:14.834 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.834 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.834 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283760 kB' 'MemFree: 42584980 kB' 'MemAvailable: 46305584 kB' 'Buffers: 9316 kB' 'Cached: 11561476 kB' 'SwapCached: 0 kB' 'Active: 8382172 kB' 'Inactive: 3689028 kB' 'Active(anon): 7965200 kB' 'Inactive(anon): 0 kB' 'Active(file): 416972 kB' 'Inactive(file): 3689028 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 503660 kB' 'Mapped: 164108 kB' 'Shmem: 7464792 kB' 'KReclaimable: 222940 kB' 'Slab: 913524 kB' 'SReclaimable: 222940 kB' 'SUnreclaim: 690584 kB' 'KernelStack: 21856 kB' 'PageTables: 8128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481908 kB' 'Committed_AS: 9180060 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214416 kB' 'VmallocChunk: 0 kB' 'Percpu: 74816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 
603508 kB' 'DirectMap2M: 12713984 kB' 'DirectMap1G: 56623104 kB' 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.835 06:42:22 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# continue 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.835 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.836 06:42:22 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@26 -- # local node 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@31 -- # no_nodes=2 00:04:14.836 06:42:22 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32585368 kB' 'MemFree: 27256584 kB' 'MemUsed: 5328784 kB' 'SwapCached: 0 kB' 'Active: 2731276 kB' 'Inactive: 88372 kB' 'Active(anon): 2492724 kB' 'Inactive(anon): 0 kB' 'Active(file): 238552 kB' 'Inactive(file): 88372 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2601068 kB' 'Mapped: 93372 kB' 'AnonPages: 221688 kB' 'Shmem: 2274144 kB' 'KernelStack: 12776 kB' 'PageTables: 4772 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 125332 kB' 'Slab: 431144 kB' 'SReclaimable: 125332 kB' 'SUnreclaim: 305812 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.836 06:42:22 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.836 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.837 
06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.837 06:42:22 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.837 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.838 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:14.838 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:14.838 06:42:22 setup.sh.hugepages.even_2G_alloc -- 
setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:04:14.838 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:04:14.838 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:04:14.838 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 1 00:04:14.838 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:14.838 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:04:14.838 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:14.838 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:14.838 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:14.838 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:14.838 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:14.838 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:14.838 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:15.098 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.098 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27698392 kB' 'MemFree: 15327356 kB' 'MemUsed: 12371036 kB' 'SwapCached: 0 kB' 'Active: 5651148 kB' 'Inactive: 3600656 kB' 'Active(anon): 5472728 kB' 'Inactive(anon): 0 kB' 'Active(file): 178420 kB' 'Inactive(file): 3600656 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8969764 kB' 'Mapped: 70736 kB' 'AnonPages: 282132 kB' 'Shmem: 5190688 kB' 'KernelStack: 9192 kB' 'PageTables: 3356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 97608 kB' 'Slab: 482380 kB' 'SReclaimable: 97608 kB' 'SUnreclaim: 384772 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:15.098 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.098 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.098 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.098 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.098 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.098 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.098 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.098 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.098 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.098 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.098 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.098 06:42:22 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.098 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.098 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.098 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.098 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.098 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.098 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.098 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.098 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.098 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.098 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.098 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.098 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.098 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.098 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.098 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.098 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.098 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.098 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.098 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.098 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.098 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.098 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.098 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.098 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.098 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.098 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.098 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.098 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.098 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.098 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.098 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.098 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.098 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.098 06:42:22 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.098 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.098 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.098 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.098 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.098 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.098 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.099 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.099 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.099 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.099 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.099 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.099 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.099 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.099 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.099 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.099 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.099 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.099 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.099 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.099 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.099 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.099 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.099 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.099 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.099 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.099 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.099 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.099 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.099 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.099 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.099 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.099 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.099 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.099 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.099 
06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.099 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.099 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.099 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.099 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.099 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.099 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.099 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.099 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.099 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.099 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.099 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.099 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.099 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.099 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.099 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.099 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.099 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.099 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.099 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.099 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.099 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.099 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.099 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.099 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.099 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.099 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.099 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.099 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.099 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.099 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.099 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.099 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.099 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.099 06:42:22 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.099 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.099 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.099 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.099 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.099 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.099 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.099 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.099 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.099 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.099 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.099 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.099 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.099 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.099 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.099 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.099 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.099 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.099 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.099 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.099 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.099 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.099 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.099 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.099 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.099 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.099 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.099 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.099 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.099 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.099 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.099 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.099 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:15.099 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:15.099 06:42:22 setup.sh.hugepages.even_2G_alloc -- 
setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:04:15.099 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:04:15.099 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:04:15.099 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:04:15.099 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # echo 'node0=512 expecting 512' 00:04:15.099 node0=512 expecting 512 00:04:15.099 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:04:15.099 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:04:15.099 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:04:15.099 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # echo 'node1=512 expecting 512' 00:04:15.099 node1=512 expecting 512 00:04:15.099 06:42:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@129 -- # [[ 512 == \5\1\2 ]] 00:04:15.099 00:04:15.099 real 0m3.024s 00:04:15.099 user 0m1.022s 00:04:15.099 sys 0m1.857s 00:04:15.099 06:42:22 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:15.099 06:42:22 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:15.099 ************************************ 00:04:15.099 END TEST even_2G_alloc 00:04:15.099 ************************************ 00:04:15.099 06:42:22 setup.sh.hugepages -- setup/hugepages.sh@202 -- # run_test odd_alloc odd_alloc 00:04:15.099 06:42:22 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:15.099 06:42:22 setup.sh.hugepages -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:15.099 06:42:22 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:15.099 ************************************ 00:04:15.099 START TEST odd_alloc 00:04:15.099 ************************************ 00:04:15.099 06:42:22 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1129 -- # odd_alloc 00:04:15.099 06:42:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@149 -- # get_test_nr_hugepages 2098176 00:04:15.099 06:42:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@48 -- # local size=2098176 00:04:15.099 06:42:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 )) 00:04:15.099 06:42:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:04:15.099 06:42:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1025 00:04:15.099 06:42:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 00:04:15.099 06:42:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:04:15.100 06:42:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:04:15.100 06:42:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1025 00:04:15.100 06:42:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:04:15.100 06:42:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:04:15.100 06:42:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:04:15.100 06:42:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:04:15.100 06:42:22 
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@73 -- # (( 0 > 0 )) 00:04:15.100 06:42:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:04:15.100 06:42:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=512 00:04:15.100 06:42:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # : 513 00:04:15.100 06:42:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 1 00:04:15.100 06:42:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:04:15.100 06:42:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=513 00:04:15.100 06:42:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # : 0 00:04:15.100 06:42:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:15.100 06:42:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:04:15.100 06:42:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@150 -- # HUGEMEM=2049 00:04:15.100 06:42:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@150 -- # setup output 00:04:15.100 06:42:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:15.100 06:42:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:18.398 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:18.398 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:18.398 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:18.398 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:18.398 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:18.398 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:18.398 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:18.398 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:18.398 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:18.398 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:18.398 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:18.398 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:18.398 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:18.398 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:18.398 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:18.398 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:18.398 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:18.398 06:42:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@151 -- # verify_nr_hugepages 00:04:18.398 06:42:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@88 -- # local node 00:04:18.398 06:42:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local sorted_t 00:04:18.398 06:42:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_s 00:04:18.398 06:42:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local surp 00:04:18.398 06:42:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local resv 00:04:18.398 06:42:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local anon 00:04:18.398 06:42:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:18.398 06:42:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:04:18.398 06:42:25 
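The odd_alloc setup traced above requests HUGEMEM=2049 (MB), i.e. size=2098176 kB, which at the default 2048 kB hugepage size becomes nr_hugepages=1025; the hugepages.sh@80-83 lines then hand those pages out across the two NUMA nodes as 512 and 513. A minimal bash sketch of that splitting pattern, reconstructed from the trace values (not the setup/hugepages.sh source itself):

#!/usr/bin/env bash
# Sketch of the per-node split visible in the hugepages.sh@80-83 trace:
# each pass gives the highest-numbered remaining node floor(pages / nodes_left),
# so an odd total such as 1025 over 2 nodes comes out as 512 and 513.
split_hugepages_per_node() {
  local _nr_hugepages=$1 _no_nodes=$2
  local -a nodes_test=()
  while (( _no_nodes > 0 )); do
    nodes_test[_no_nodes - 1]=$(( _nr_hugepages / _no_nodes ))
    : $(( _nr_hugepages -= nodes_test[_no_nodes - 1] ))   # pages still to place (513, then 0 in the trace)
    : $(( --_no_nodes ))                                   # nodes still to fill (1, then 0 in the trace)
  done
  echo "${nodes_test[@]}"
}
split_hugepages_per_node 1025 2   # prints "513 512", matching the 512 / 513 assignments logged above

The exact arithmetic in the real script may differ; this only reproduces the intermediate values (512, 513, 1, 0) that the xtrace shows for 1025 pages over 2 nodes.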
setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:18.398 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:18.398 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:18.398 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:18.398 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:18.398 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:18.398 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:18.398 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:18.398 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:18.398 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.398 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.398 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283760 kB' 'MemFree: 42611928 kB' 'MemAvailable: 46332516 kB' 'Buffers: 9316 kB' 'Cached: 11561604 kB' 'SwapCached: 0 kB' 'Active: 8381900 kB' 'Inactive: 3689028 kB' 'Active(anon): 7964928 kB' 'Inactive(anon): 0 kB' 'Active(file): 416972 kB' 'Inactive(file): 3689028 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 503280 kB' 'Mapped: 163196 kB' 'Shmem: 7464920 kB' 'KReclaimable: 222908 kB' 'Slab: 913052 kB' 'SReclaimable: 222908 kB' 'SUnreclaim: 690144 kB' 'KernelStack: 21840 kB' 'PageTables: 7736 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37480884 kB' 'Committed_AS: 9170616 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214448 kB' 'VmallocChunk: 0 kB' 'Percpu: 74816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 603508 kB' 'DirectMap2M: 12713984 kB' 'DirectMap1G: 56623104 kB' 00:04:18.398 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.398 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.398 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.398 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.398 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.398 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.398 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.398 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.398 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.398 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.398 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.398 06:42:25 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:18.398 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.398 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.398 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.398 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.398 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.398 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.398 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.398 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.398 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.398 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.398 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.398 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.398 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.398 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.398 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.398 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.398 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.398 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.398 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.398 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.398 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.398 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.398 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.398 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.398 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.398 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.398 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.398 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.398 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.398 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.398 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.398 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.398 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.398 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.398 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.398 06:42:25 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:18.398 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.398 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # anon=0 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:18.399 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:18.400 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:18.400 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:18.400 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:18.400 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:18.400 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.400 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.400 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283760 kB' 'MemFree: 42612036 kB' 'MemAvailable: 46332624 kB' 'Buffers: 9316 kB' 'Cached: 11561608 kB' 'SwapCached: 0 kB' 'Active: 8381796 kB' 'Inactive: 3689028 kB' 'Active(anon): 7964824 kB' 'Inactive(anon): 0 kB' 'Active(file): 416972 kB' 'Inactive(file): 3689028 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 503176 kB' 'Mapped: 163092 kB' 'Shmem: 7464924 kB' 'KReclaimable: 222908 kB' 'Slab: 913052 kB' 'SReclaimable: 222908 kB' 'SUnreclaim: 690144 kB' 'KernelStack: 21824 kB' 'PageTables: 7656 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37480884 kB' 'Committed_AS: 9170632 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214416 kB' 'VmallocChunk: 0 kB' 'Percpu: 74816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 603508 kB' 'DirectMap2M: 12713984 kB' 'DirectMap1G: 56623104 kB' 00:04:18.400 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.400 06:42:25 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:18.400 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.400 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.400 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.400 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.400 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.400 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.400 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.400 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.400 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.400 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.400 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.400 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.400 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.400 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.400 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.400 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.400 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.400 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.400 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.400 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.400 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.400 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.400 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.400 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.400 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.400 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.400 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.400 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.400 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.400 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.400 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.400 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.400 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.400 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.400 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.400 06:42:25 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.400 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.400 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.400 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.400 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.400 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.400 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.400 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.400 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.400 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.400 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.400 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.400 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.400 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.400 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.400 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.400 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.400 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.400 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.400 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.400 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.400 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.400 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.400 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.400 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.400 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.400 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.400 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.400 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.400 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.400 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.400 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.400 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.400 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.400 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.400 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.400 
06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.400 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.400 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.400 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.400 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.400 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.400 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.400 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.400 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.400 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.400 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.400 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.400 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.400 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.400 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.400 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.400 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.400 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.400 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.400 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.400 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.400 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.400 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.400 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.400 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.400 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.400 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.400 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@98 -- # surp=0 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:18.401 06:42:25 
setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.401 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.402 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283760 kB' 'MemFree: 42614328 kB' 'MemAvailable: 46334916 kB' 'Buffers: 9316 kB' 'Cached: 11561624 kB' 'SwapCached: 0 kB' 'Active: 8381772 kB' 'Inactive: 3689028 kB' 'Active(anon): 7964800 kB' 'Inactive(anon): 0 kB' 'Active(file): 416972 kB' 'Inactive(file): 3689028 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 503168 kB' 'Mapped: 163092 kB' 'Shmem: 7464940 kB' 'KReclaimable: 222908 kB' 'Slab: 913044 kB' 'SReclaimable: 222908 kB' 'SUnreclaim: 690136 kB' 'KernelStack: 21824 kB' 'PageTables: 7656 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37480884 kB' 'Committed_AS: 9170656 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214416 kB' 'VmallocChunk: 0 kB' 'Percpu: 74816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 603508 kB' 'DirectMap2M: 12713984 kB' 'DirectMap1G: 56623104 kB' 00:04:18.402 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.402 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.402 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.402 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.402 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.402 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.402 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.402 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.402 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.402 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.402 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.402 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.402 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.402 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.402 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.402 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.402 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.402 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.402 06:42:25 
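[editor note] The HugePages_Rsvd lookup above repeats the same pattern as every get_meminfo call in this trace: choose /proc/meminfo (or a per-node meminfo file), read it into an array, strip any "Node N " prefixes, print the snapshot, then scan key by key until the requested field matches. A condensed, self-contained bash sketch of that parsing loop follows; the function name and usage lines are illustrative, the real helper lives in setup/common.sh.

shopt -s extglob   # the +([0-9]) pattern below needs extended globbing

# Sketch of the get_meminfo parsing seen in this trace; usage: meminfo_get <key> [node]
meminfo_get() {
    local get=$1 node=$2 var val _ mem
    local mem_f=/proc/meminfo
    # Per-node lookups read the node-local file instead, as done later for node0.
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    # Node files prefix each line with "Node N "; drop it so keys match /proc/meminfo.
    mem=("${mem[@]#Node +([0-9]) }")
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

surp=$(meminfo_get HugePages_Surp)   # 0 in the run above
resv=$(meminfo_get HugePages_Rsvd)   # 0 in the run above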
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.402 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.402 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.402 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.402 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.402 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.402 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.402 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.402 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.402 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.402 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.402 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.402 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.402 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.402 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.402 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.402 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.402 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.402 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.402 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.402 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.402 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.402 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.402 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.402 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.402 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.402 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.402 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.402 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.402 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.402 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.402 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.402 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.402 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.402 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.402 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:04:18.402 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.402 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.402 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.402 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.402 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.402 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.402 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.402 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.402 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.402 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.402 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.402 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.402 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.402 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.402 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.402 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.402 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.402 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.402 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.402 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.402 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.402 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.402 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.402 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.402 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.402 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.402 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.402 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.402 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.402 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.402 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.402 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.402 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.402 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.402 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.402 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:04:18.402 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.402 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.402 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.402 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.402 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.403 
06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # resv=0 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1025 00:04:18.403 nr_hugepages=1025 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:04:18.403 resv_hugepages=0 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:04:18.403 surplus_hugepages=0 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:04:18.403 anon_hugepages=0 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@106 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@108 -- # (( 1025 == nr_hugepages )) 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.403 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283760 kB' 'MemFree: 42611376 kB' 'MemAvailable: 46331964 kB' 'Buffers: 9316 kB' 'Cached: 11561644 kB' 'SwapCached: 0 kB' 'Active: 8381748 kB' 'Inactive: 3689028 kB' 'Active(anon): 7964776 kB' 'Inactive(anon): 0 kB' 'Active(file): 416972 kB' 'Inactive(file): 3689028 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 503128 kB' 'Mapped: 163092 kB' 'Shmem: 7464960 kB' 'KReclaimable: 222908 kB' 'Slab: 913044 kB' 'SReclaimable: 222908 kB' 'SUnreclaim: 690136 kB' 'KernelStack: 21808 kB' 'PageTables: 7604 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37480884 kB' 'Committed_AS: 9170676 kB' 'VmallocTotal: 34359738367 kB' 
'VmallocUsed: 214416 kB' 'VmallocChunk: 0 kB' 'Percpu: 74816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 603508 kB' 'DirectMap2M: 12713984 kB' 'DirectMap1G: 56623104 kB' 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.404 06:42:25 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.404 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.405 06:42:25 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@26 -- # local node 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=513 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@31 -- # no_nodes=2 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:04:18.405 06:42:25 
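[editor note] At this point the odd_alloc case has its inputs: nr_hugepages=1025, resv=0, surp=0, and an intentionally odd split of 513 pages on node 0 and 512 on node 1. The check traced above, (( 1025 == nr_hugepages + surp + resv )), is the invariant it relies on: the requested page count plus surplus plus reserved must equal what the kernel reports. A minimal stand-alone version of that system-wide check, with values matching the ones echoed in this log; the awk parsing is illustrative, not the suite's own get_meminfo helper.

# System-wide view, matching nr_hugepages=1025, resv_hugepages=0, surplus_hugepages=0 above.
nr_hugepages=1025
total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)
surp=$(awk '$1 == "HugePages_Surp:" {print $2}' /proc/meminfo)
rsvd=$(awk '$1 == "HugePages_Rsvd:" {print $2}' /proc/meminfo)
(( total == nr_hugepages + surp + rsvd )) ||
    echo "hugepage accounting mismatch: $total != $nr_hugepages + $surp + $rsvd" >&2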
setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32585368 kB' 'MemFree: 27265580 kB' 'MemUsed: 5319788 kB' 'SwapCached: 0 kB' 'Active: 2733908 kB' 'Inactive: 88372 kB' 'Active(anon): 2495356 kB' 'Inactive(anon): 0 kB' 'Active(file): 238552 kB' 'Inactive(file): 88372 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2601080 kB' 'Mapped: 93060 kB' 'AnonPages: 224412 kB' 'Shmem: 2274156 kB' 'KernelStack: 12648 kB' 'PageTables: 4484 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 125332 kB' 'Slab: 430976 kB' 'SReclaimable: 125332 kB' 'SUnreclaim: 305644 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.405 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
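[editor note] The node-0 snapshot just printed shows the per-node file format: every line carries a "Node 0 " prefix and the hugepage counters appear without a kB unit (HugePages_Total: 513, HugePages_Free: 513, HugePages_Surp: 0). A short sketch of verifying the odd split directly from sysfs; the expected counts are taken from this run and the loop is illustrative, not the test's own code.

# Expected odd_alloc split from this run: node0 gets the extra page.
declare -A expected=([0]=513 [1]=512)
for node in "${!expected[@]}"; do
    f=/sys/devices/system/node/node$node/meminfo
    # Per-node lines look like: "Node 0 HugePages_Total:   513"
    got=$(awk '$3 == "HugePages_Total:" {print $4}' "$f")
    [[ $got == "${expected[$node]}" ]] ||
        echo "node$node: expected ${expected[$node]} huge pages, found $got" >&2
done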
read -r var val _ 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.406 06:42:25 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.406 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:18.407 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:18.407 06:42:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:04:18.407 06:42:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:04:18.407 06:42:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:04:18.407 06:42:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 1 00:04:18.407 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:18.407 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:04:18.407 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:18.407 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:18.407 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:18.407 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:18.407 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # 
mem_f=/sys/devices/system/node/node1/meminfo 00:04:18.407 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:18.407 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:18.407 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.407 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.407 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27698392 kB' 'MemFree: 15342876 kB' 'MemUsed: 12355516 kB' 'SwapCached: 0 kB' 'Active: 5650888 kB' 'Inactive: 3600656 kB' 'Active(anon): 5472468 kB' 'Inactive(anon): 0 kB' 'Active(file): 178420 kB' 'Inactive(file): 3600656 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8969920 kB' 'Mapped: 70432 kB' 'AnonPages: 281700 kB' 'Shmem: 5190844 kB' 'KernelStack: 9160 kB' 'PageTables: 3124 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 97576 kB' 'Slab: 482068 kB' 'SReclaimable: 97576 kB' 'SUnreclaim: 384492 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:18.407 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.407 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.407 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.407 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.407 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.407 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.407 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.407 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.407 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.407 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.407 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.407 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.407 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.407 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.407 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.407 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.407 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.407 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.407 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.407 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.407 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.407 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.407 06:42:25 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.407 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.407 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.407 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.407 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.407 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.407 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.407 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.407 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.407 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.407 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.407 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.407 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.407 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.407 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.407 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.407 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.407 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.407 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.407 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.407 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.407 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.407 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.407 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.407 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.407 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.407 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.407 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.407 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.407 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.407 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.407 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.407 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.407 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.407 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.407 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
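The long runs of [[ var == pattern ]] / continue statements above are the get_meminfo helper in setup/common.sh scanning a node's meminfo for a single field (here HugePages_Surp on node 0 and node 1 during the odd_alloc check). A condensed sketch of that lookup pattern, assuming only the structure visible in the trace (mem_f selection, the "Node N " prefix strip, and the IFS=': ' read loop) rather than quoting the canonical helper:

#!/usr/bin/env bash
# Condensed sketch (not the canonical setup/common.sh helper) of the
# per-node meminfo lookup the trace above repeats: pick the node's
# sysfs meminfo when a node is given, strip the "Node N " prefix, and
# scan for the requested key.
shopt -s extglob

get_meminfo_sketch() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo
    local -a mem
    local line var val _

    # Per-node lookups use the sysfs copy of meminfo, as in the trace.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix every line with "Node N"

    # Scan "Key: value [kB]" lines until the requested key matches.
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

# Example: surplus hugepages on node 1, as queried by the odd_alloc test.
get_meminfo_sketch HugePages_Surp 1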
00:04:18.407 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.407 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.407 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.407 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.407 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.407 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.407 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.407 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.407 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.407 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.407 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.407 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.407 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.407 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.407 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.407 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.407 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.407 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.407 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.407 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.407 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.407 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.407 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.407 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.407 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.407 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.407 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.407 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.407 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.407 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.408 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.408 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.408 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.408 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.408 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.408 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # continue 00:04:18.408 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.408 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.408 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.408 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.408 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.408 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.408 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.408 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.408 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.408 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.408 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.408 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.408 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.408 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.408 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.408 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.408 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.408 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.408 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.408 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.408 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.408 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.408 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.408 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.408 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.408 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.408 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.408 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.408 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.408 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.408 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.408 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.408 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.408 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.408 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.408 06:42:25 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.408 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.408 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.408 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.408 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.408 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.408 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.408 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.408 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.408 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.408 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.408 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.408 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.408 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.408 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.408 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.408 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:18.408 06:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:18.408 06:42:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:04:18.408 06:42:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:04:18.408 06:42:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:04:18.408 06:42:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:04:18.408 06:42:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # echo 'node0=513 expecting 513' 00:04:18.408 node0=513 expecting 513 00:04:18.408 06:42:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:04:18.408 06:42:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:04:18.408 06:42:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:04:18.408 06:42:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # echo 'node1=512 expecting 512' 00:04:18.408 node1=512 expecting 512 00:04:18.408 06:42:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@129 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:04:18.408 00:04:18.408 real 0m3.463s 00:04:18.408 user 0m1.279s 00:04:18.408 sys 0m2.225s 00:04:18.408 06:42:25 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:18.408 06:42:25 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:18.408 ************************************ 00:04:18.408 END TEST odd_alloc 00:04:18.408 ************************************ 00:04:18.668 06:42:25 setup.sh.hugepages -- setup/hugepages.sh@203 -- # run_test custom_alloc custom_alloc 00:04:18.668 06:42:25 setup.sh.hugepages -- common/autotest_common.sh@1105 -- 
# '[' 2 -le 1 ']' 00:04:18.668 06:42:25 setup.sh.hugepages -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:18.668 06:42:25 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:18.668 ************************************ 00:04:18.668 START TEST custom_alloc 00:04:18.668 ************************************ 00:04:18.668 06:42:25 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1129 -- # custom_alloc 00:04:18.668 06:42:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@157 -- # local IFS=, 00:04:18.668 06:42:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@159 -- # local node 00:04:18.668 06:42:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@160 -- # nodes_hp=() 00:04:18.668 06:42:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@160 -- # local nodes_hp 00:04:18.668 06:42:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@162 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:18.668 06:42:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@164 -- # get_test_nr_hugepages 1048576 00:04:18.668 06:42:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@48 -- # local size=1048576 00:04:18.668 06:42:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 )) 00:04:18.668 06:42:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:04:18.668 06:42:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=512 00:04:18.668 06:42:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 00:04:18.668 06:42:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:04:18.668 06:42:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:04:18.668 06:42:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=512 00:04:18.668 06:42:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:04:18.668 06:42:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:04:18.668 06:42:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:04:18.668 06:42:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:04:18.668 06:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@73 -- # (( 0 > 0 )) 00:04:18.668 06:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:04:18.668 06:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=256 00:04:18.668 06:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # : 256 00:04:18.668 06:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 1 00:04:18.668 06:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:04:18.668 06:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=256 00:04:18.668 06:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # : 0 00:04:18.668 06:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:18.668 06:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:04:18.668 06:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@165 -- # nodes_hp[0]=512 00:04:18.668 06:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@166 -- # (( 2 > 1 )) 00:04:18.668 06:42:26 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@167 -- # get_test_nr_hugepages 2097152 00:04:18.668 06:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@48 -- # local size=2097152 00:04:18.668 06:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 )) 00:04:18.668 06:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:04:18.668 06:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1024 00:04:18.668 06:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 00:04:18.668 06:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:04:18.668 06:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:04:18.668 06:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024 00:04:18.668 06:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:04:18.668 06:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:04:18.668 06:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:04:18.668 06:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:04:18.669 06:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@73 -- # (( 1 > 0 )) 00:04:18.669 06:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:18.669 06:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # nodes_test[_no_nodes]=512 00:04:18.669 06:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@77 -- # return 0 00:04:18.669 06:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@168 -- # nodes_hp[1]=1024 00:04:18.669 06:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@171 -- # for node in "${!nodes_hp[@]}" 00:04:18.669 06:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:18.669 06:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@173 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:18.669 06:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@171 -- # for node in "${!nodes_hp[@]}" 00:04:18.669 06:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:18.669 06:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@173 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:18.669 06:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # get_test_nr_hugepages_per_node 00:04:18.669 06:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:04:18.669 06:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:04:18.669 06:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024 00:04:18.669 06:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:04:18.669 06:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:04:18.669 06:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:04:18.669 06:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:04:18.669 06:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@73 -- # (( 2 > 0 )) 00:04:18.669 06:42:26 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@74 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:18.669 06:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # nodes_test[_no_nodes]=512 00:04:18.669 06:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:18.669 06:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # nodes_test[_no_nodes]=1024 00:04:18.669 06:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@77 -- # return 0 00:04:18.669 06:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:04:18.669 06:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # setup output 00:04:18.669 06:42:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:18.669 06:42:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:21.962 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:21.962 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:21.962 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:21.962 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:21.962 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:21.962 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:21.962 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:21.962 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:21.962 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:21.962 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:21.962 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:21.962 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:21.962 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:21.962 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:21.962 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:21.962 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:21.962 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:21.962 06:42:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nr_hugepages=1536 00:04:21.962 06:42:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # verify_nr_hugepages 00:04:21.962 06:42:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@88 -- # local node 00:04:21.962 06:42:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local sorted_t 00:04:21.962 06:42:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_s 00:04:21.962 06:42:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local surp 00:04:21.963 06:42:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local resv 00:04:21.963 06:42:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local anon 00:04:21.963 06:42:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:21.963 06:42:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:04:21.963 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:21.963 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:21.963 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:21.963 
06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:21.963 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:21.963 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:21.963 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:21.963 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:21.963 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:21.963 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.963 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.963 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283760 kB' 'MemFree: 41554260 kB' 'MemAvailable: 45274848 kB' 'Buffers: 9316 kB' 'Cached: 11561772 kB' 'SwapCached: 0 kB' 'Active: 8388448 kB' 'Inactive: 3689028 kB' 'Active(anon): 7971476 kB' 'Inactive(anon): 0 kB' 'Active(file): 416972 kB' 'Inactive(file): 3689028 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 509624 kB' 'Mapped: 163560 kB' 'Shmem: 7465088 kB' 'KReclaimable: 222908 kB' 'Slab: 913572 kB' 'SReclaimable: 222908 kB' 'SUnreclaim: 690664 kB' 'KernelStack: 21856 kB' 'PageTables: 7632 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36957620 kB' 'Committed_AS: 9177576 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214420 kB' 'VmallocChunk: 0 kB' 'Percpu: 74816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 603508 kB' 'DirectMap2M: 12713984 kB' 'DirectMap1G: 56623104 kB' 00:04:21.963 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.963 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.963 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.963 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.963 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.963 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.963 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.963 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.963 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.963 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.963 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.963 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.963 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.963 06:42:29 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.963 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.963 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.963 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.963 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.963 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.963 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.963 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.963 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.963 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.963 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.963 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.963 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.963 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.963 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.963 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.963 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.963 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.963 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.963 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.963 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.963 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.963 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.963 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.963 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.963 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.963 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.963 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.963 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.963 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.963 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.963 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.963 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.963 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.963 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.963 06:42:29 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.963 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.963 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.963 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.963 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.963 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.963 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.963 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.963 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.963 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.963 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.963 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.963 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.963 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.963 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.963 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.963 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.963 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.963 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.963 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.963 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.963 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.963 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.963 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.963 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.963 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.963 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.963 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.963 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.963 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.963 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.963 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.963 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.963 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.963 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.963 06:42:29 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.963 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.963 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.963 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.963 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.963 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.963 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.963 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.963 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.963 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.964 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.964 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.964 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.964 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.964 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.964 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.964 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.964 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.964 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.964 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.964 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.964 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.964 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.964 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.964 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.964 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.964 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.964 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.964 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.964 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.964 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.964 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.964 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.964 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.964 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
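The HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' and nr_hugepages=1536 values traced earlier follow from dividing each requested size by the 2048 kB default hugepage size; the sizes 1048576 and 2097152 passed to get_test_nr_hugepages therefore appear to be kilobytes (1 GiB and 2 GiB). A minimal sketch of that arithmetic, with the pages_for helper invented here for illustration; the real logic lives in test/setup/hugepages.sh:

#!/usr/bin/env bash
# Minimal sketch of the per-node hugepage arithmetic traced above for
# the custom_alloc test; not the actual hugepages.sh implementation.
default_hugepages=2048   # kB, matching Hugepagesize in the trace

pages_for() {
    # size appears to be in kB: 1048576 -> 512 pages, 2097152 -> 1024 pages
    local size_kb=$1
    echo $(( size_kb / default_hugepages ))
}

nodes_hp=()
nodes_hp[0]=$(pages_for 1048576)   # 512
nodes_hp[1]=$(pages_for 2097152)   # 1024

# Build the HUGENODE string handed to scripts/setup.sh and the total count.
HUGENODE=
nr_hugepages=0
for node in "${!nodes_hp[@]}"; do
    HUGENODE+="${HUGENODE:+,}nodes_hp[$node]=${nodes_hp[$node]}"
    (( nr_hugepages += nodes_hp[node] ))
done

echo "HUGENODE=$HUGENODE"          # nodes_hp[0]=512,nodes_hp[1]=1024
echo "nr_hugepages=$nr_hugepages"  # 1536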
00:04:21.964 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.964 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.964 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.964 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.964 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.964 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.964 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.964 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.964 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.964 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.964 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.964 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.964 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.964 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.964 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.964 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.964 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.964 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.964 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.964 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.964 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.964 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.964 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.964 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.964 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.964 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.964 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.964 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.964 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.964 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.964 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.964 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.964 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.964 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.964 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.964 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.964 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.964 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.964 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.964 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.964 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.964 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.964 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.964 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:21.964 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:21.964 06:42:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # anon=0 00:04:21.964 06:42:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:04:21.964 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:21.964 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:21.964 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:21.964 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:21.964 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:21.964 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:21.964 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:21.964 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:21.964 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:21.964 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.964 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.964 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283760 kB' 'MemFree: 41560080 kB' 'MemAvailable: 45280668 kB' 'Buffers: 9316 kB' 'Cached: 11561776 kB' 'SwapCached: 0 kB' 'Active: 8383208 kB' 'Inactive: 3689028 kB' 'Active(anon): 7966236 kB' 'Inactive(anon): 0 kB' 'Active(file): 416972 kB' 'Inactive(file): 3689028 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 504468 kB' 'Mapped: 163156 kB' 'Shmem: 7465092 kB' 'KReclaimable: 222908 kB' 'Slab: 913476 kB' 'SReclaimable: 222908 kB' 'SUnreclaim: 690568 kB' 'KernelStack: 21840 kB' 'PageTables: 7544 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36957620 kB' 'Committed_AS: 9171324 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214320 kB' 'VmallocChunk: 0 kB' 'Percpu: 74816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 
'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 603508 kB' 'DirectMap2M: 12713984 kB' 'DirectMap1G: 56623104 kB' 00:04:21.964 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.964 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.964 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.964 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.964 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.964 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.964 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.964 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.964 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.964 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.964 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.964 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.964 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.964 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.964 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.964 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.964 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.964 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.964 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.964 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.964 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.964 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.964 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.965 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.965 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.965 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.965 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.965 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.965 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.965 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.965 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.965 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.965 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
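[editor's note] The repeated IFS=': ' / read / [[ ... ]] / continue lines above and below are one traced helper, get_meminfo in setup/common.sh: it snapshots /proc/meminfo (or a per-node meminfo file), then walks it key by key until the requested field matches and echoes that field's value. A minimal sketch reconstructed from the trace follows; function body details, line numbers and the node handling are assumptions and may differ from the real setup/common.sh.

  #!/usr/bin/env bash
  shopt -s extglob

  # Sketch of the traced helper (assumption: reconstructed from the log, not copied from SPDK sources).
  get_meminfo() {
      local get=$1 node=$2
      local var val
      local mem_f mem

      mem_f=/proc/meminfo
      # With a node number the per-node meminfo is used instead; node is empty in this run,
      # which is why the trace shows the check against /sys/devices/system/node/node/meminfo failing.
      if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")   # strip the "Node N " prefix carried by per-node meminfo lines

      # This loop is what produces the wall of common.sh@31/@32 lines in the log:
      # every key is compared against $get until one matches, then its value is echoed.
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue
          echo "$val" && return 0
      done < <(printf '%s\n' "${mem[@]}")
      return 1
  }

  # Example: get_meminfo HugePages_Surp   -> prints 0 on this runner, so hugepages.sh sets surp=0.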
[setup/common.sh@31-32 loop: every /proc/meminfo key from MemTotal through HugePages_Rsvd read and compared against HugePages_Surp; no match, continue]
00:04:21.966 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:21.966 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:21.966 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:21.966 06:42:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@98 -- # surp=0
00:04:21.966 06:42:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd
00:04:21.966 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:21.966 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:21.966 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:21.966 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:21.966 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:21.966 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:21.966 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:21.966 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:21.966 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:21.966 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:21.966 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:21.966 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283760 kB' 'MemFree: 41560308 kB' 'MemAvailable: 45280896 kB' 'Buffers: 9316 kB' 'Cached: 11561792 kB' 'SwapCached: 0 kB' 'Active: 8382516 kB' 'Inactive: 3689028 kB' 'Active(anon): 7965544 kB' 'Inactive(anon): 0 kB' 'Active(file): 416972 kB' 'Inactive(file): 3689028 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 503656 kB' 'Mapped: 163100 kB' 'Shmem: 7465108 kB' 'KReclaimable: 222908 kB' 'Slab: 913476 kB' 'SReclaimable: 222908 kB' 'SUnreclaim: 690568 kB' 'KernelStack: 21824 kB' 'PageTables: 7492 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36957620 kB' 'Committed_AS: 9171348 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214336 kB' 'VmallocChunk: 0 kB' 'Percpu: 74816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 603508 kB' 'DirectMap2M: 12713984 kB' 'DirectMap1G: 56623104 kB'
[setup/common.sh@31-32 loop: every /proc/meminfo key from MemTotal through HugePages_Free read and compared against HugePages_Rsvd; no match, continue]
00:04:22.231 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:22.231 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:22.231 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:22.231 06:42:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # resv=0
00:04:22.231 06:42:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1536
00:04:22.231 06:42:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0
00:04:22.231 06:42:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0
00:04:22.231 06:42:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0
00:04:22.231 06:42:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@106 -- # (( 1536 == nr_hugepages + surp + resv ))
00:04:22.231 06:42:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@108 -- # (( 1536 == nr_hugepages ))
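[editor's note] At this point hugepages.sh has the figures it needs for the custom-allocation check: the test asked for 1536 hugepages, and the get_meminfo calls traced above returned anon=0, surp=0 and resv=0. A condensed, hypothetical view of that bookkeeping is sketched below, assuming the get_meminfo helper sketched earlier; the variable names anon, surp, resv and nr_hugepages come from the log, while total and the overall layout are assumptions about setup/hugepages.sh, which may structure this differently.

  nr_hugepages=1536                      # pages requested by the custom_alloc test
  anon=$(get_meminfo AnonHugePages)      # hugepages.sh@96  -> 0
  surp=$(get_meminfo HugePages_Surp)     # hugepages.sh@98  -> 0
  resv=$(get_meminfo HugePages_Rsvd)     # hugepages.sh@99  -> 0

  echo "nr_hugepages=$nr_hugepages"      # the four summary values echoed at hugepages.sh@101-104
  echo "resv_hugepages=$resv"
  echo "surplus_hugepages=$surp"
  echo "anon_hugepages=$anon"

  # The requested 1536 pages must be fully covered by the pool itself, with no
  # surplus or reserved pages, exactly as the two arithmetic checks in the trace assert:
  (( 1536 == nr_hugepages + surp + resv ))
  (( 1536 == nr_hugepages ))

  # get_meminfo HugePages_Total is then called to confirm the kernel really allocated
  # 1536 pages; the comparison against that value falls outside this excerpt.
  total=$(get_meminfo HugePages_Total)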
00:04:22.231 06:42:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total
00:04:22.231 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:22.231 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:22.231 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:22.231 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:22.231 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:22.231 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:22.231 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:22.231 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:22.231 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:22.231 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:22.231 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:22.231 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283760 kB' 'MemFree: 41560612 kB' 'MemAvailable: 45281200 kB' 'Buffers: 9316 kB' 'Cached: 11561812 kB' 'SwapCached: 0 kB' 'Active: 8382844 kB' 'Inactive: 3689028 kB' 'Active(anon): 7965872 kB' 'Inactive(anon): 0 kB' 'Active(file): 416972 kB' 'Inactive(file): 3689028 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 504012 kB' 'Mapped: 163100 kB' 'Shmem: 7465128 kB' 'KReclaimable: 222908 kB' 'Slab: 913476 kB' 'SReclaimable: 222908 kB' 'SUnreclaim: 690568 kB' 'KernelStack: 21840 kB' 'PageTables: 7552 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36957620 kB' 'Committed_AS: 9172876 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214336 kB' 'VmallocChunk: 0 kB' 'Percpu: 74816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 603508 kB' 'DirectMap2M: 12713984 kB' 'DirectMap1G: 56623104 kB'
[setup/common.sh@31-32 loop in progress at the end of this excerpt: keys MemTotal through VmallocChunk read and compared against HugePages_Total; no match, continue]
# read -r var val _ 00:04:22.232 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.232 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.232 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.232 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.232 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.232 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.232 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@26 -- # local node 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@31 -- # no_nodes=2 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32585368 kB' 'MemFree: 
27245956 kB' 'MemUsed: 5339412 kB' 'SwapCached: 0 kB' 'Active: 2731736 kB' 'Inactive: 88372 kB' 'Active(anon): 2493184 kB' 'Inactive(anon): 0 kB' 'Active(file): 238552 kB' 'Inactive(file): 88372 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2601104 kB' 'Mapped: 93068 kB' 'AnonPages: 222092 kB' 'Shmem: 2274180 kB' 'KernelStack: 12600 kB' 'PageTables: 4216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 125332 kB' 'Slab: 431304 kB' 'SReclaimable: 125332 kB' 'SUnreclaim: 305972 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.233 06:42:29 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.233 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.234 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.234 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.234 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.234 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.234 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.234 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.234 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- 
# continue 00:04:22.234 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.234 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.234 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.234 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.234 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.234 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.234 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.234 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.234 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.234 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.234 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.234 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.234 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.234 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.234 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.234 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.234 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.234 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.234 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.234 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.234 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.234 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.234 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.234 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.234 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.234 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.234 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.234 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.234 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.234 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.234 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.234 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.234 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.234 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.234 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.234 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.234 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.234 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.234 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.234 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.234 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.234 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.234 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.234 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.234 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.234 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.234 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.234 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.234 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.234 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.234 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.234 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.234 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.234 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.234 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.234 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.234 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.234 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.234 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.234 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.234 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.234 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.234 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.234 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.234 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.234 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.234 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.234 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.234 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.234 06:42:29 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.234 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.234 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.234 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.234 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.234 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.234 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.234 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.234 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.234 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.234 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.234 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.234 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.234 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.234 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:22.234 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:22.234 06:42:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:04:22.234 06:42:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:04:22.234 06:42:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:04:22.234 06:42:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 1 00:04:22.234 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:22.234 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:04:22.234 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:22.234 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:22.234 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.234 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:22.234 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:22.234 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:22.234 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.234 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.234 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27698392 kB' 'MemFree: 14314772 kB' 'MemUsed: 13383620 kB' 'SwapCached: 0 kB' 'Active: 5651328 kB' 'Inactive: 3600656 kB' 'Active(anon): 5472908 kB' 'Inactive(anon): 0 kB' 'Active(file): 178420 kB' 'Inactive(file): 
3600656 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8970048 kB' 'Mapped: 70032 kB' 'AnonPages: 282092 kB' 'Shmem: 5190972 kB' 'KernelStack: 9144 kB' 'PageTables: 3064 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 97576 kB' 'Slab: 482172 kB' 'SReclaimable: 97576 kB' 'SUnreclaim: 384596 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.235 06:42:29 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.235 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.236 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.236 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.236 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.236 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.236 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.236 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.236 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.236 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.236 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.236 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.236 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.236 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.236 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.236 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.236 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.236 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.236 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.236 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.236 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.236 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.236 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.236 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.236 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.236 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.236 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.236 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.236 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.236 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.236 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:22.236 06:42:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:22.236 06:42:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:04:22.236 06:42:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:04:22.236 06:42:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:04:22.236 06:42:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:04:22.236 06:42:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # echo 'node0=512 expecting 512' 00:04:22.236 node0=512 expecting 512 00:04:22.236 06:42:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:04:22.236 06:42:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:04:22.236 06:42:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:04:22.236 06:42:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # echo 'node1=1024 expecting 1024' 00:04:22.236 node1=1024 expecting 1024 00:04:22.236 06:42:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@129 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:04:22.236 00:04:22.236 real 0m3.602s 00:04:22.236 user 0m1.357s 00:04:22.236 sys 0m2.310s 00:04:22.236 06:42:29 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:22.236 06:42:29 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:22.236 ************************************ 00:04:22.236 END TEST custom_alloc 00:04:22.236 ************************************ 00:04:22.236 06:42:29 setup.sh.hugepages -- setup/hugepages.sh@204 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:22.236 06:42:29 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:22.236 06:42:29 setup.sh.hugepages -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:22.236 06:42:29 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:22.236 ************************************ 
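The custom_alloc verification that just finished is driven by the meminfo scan traced above: setup/common.sh's get_meminfo reads /proc/meminfo, or /sys/devices/system/node/nodeN/meminfo when a node id is given, splits each line on ': ', and echoes the value once the requested key is reached; hugepages.sh then checks that the global HugePages_Total (1536 here) equals nr_hugepages plus surplus plus reserved, and that the per-node counts (512 on node0, 1024 on node1) match what the test asked for. A minimal sketch of that lookup, assuming a standard Linux meminfo layout; the helper name get_meminfo_sketch and its exact error handling are illustrative, not the SPDK function itself:

    # Illustrative stand-in for the common.sh meminfo lookup (not the SPDK code).
    get_meminfo_sketch() {
        local get=$1 node=${2:-}                  # key to fetch, optional NUMA node id
        local mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local line var val _
        while read -r line; do
            # per-node files prefix every line with "Node <id> "; strip that prefix
            if [[ $line == Node\ * ]]; then
                line=${line#Node }
                line=${line#* }
            fi
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"                        # numeric value (kB or page count)
                return 0
            fi
        done < "$mem_f"
        return 1
    }

With the numbers printed in this run, get_meminfo_sketch HugePages_Total 0 would give 512 and get_meminfo_sketch HugePages_Total 1 would give 1024, and 512 + 1024 = 1536 is the same HugePages_Total the (( 1536 == nr_hugepages + surp + resv )) check starts from; the per-node MemUsed values above are likewise MemTotal minus MemFree (32585368 - 27245956 = 5339412 kB on node0, 27698392 - 14314772 = 13383620 kB on node1).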
00:04:22.236 START TEST no_shrink_alloc 00:04:22.236 ************************************ 00:04:22.236 06:42:29 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1129 -- # no_shrink_alloc 00:04:22.236 06:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@185 -- # get_test_nr_hugepages 2097152 0 00:04:22.236 06:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@48 -- # local size=2097152 00:04:22.236 06:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # (( 2 > 1 )) 00:04:22.236 06:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # shift 00:04:22.236 06:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # node_ids=('0') 00:04:22.236 06:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # local node_ids 00:04:22.236 06:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:04:22.236 06:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1024 00:04:22.236 06:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 0 00:04:22.236 06:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@61 -- # user_nodes=('0') 00:04:22.236 06:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:04:22.236 06:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024 00:04:22.236 06:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:04:22.236 06:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:04:22.236 06:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:04:22.236 06:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@68 -- # (( 1 > 0 )) 00:04:22.236 06:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # for _no_nodes in "${user_nodes[@]}" 00:04:22.236 06:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # nodes_test[_no_nodes]=1024 00:04:22.236 06:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@72 -- # return 0 00:04:22.236 06:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@188 -- # NRHUGE=1024 00:04:22.236 06:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@188 -- # HUGENODE=0 00:04:22.236 06:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@188 -- # setup output 00:04:22.236 06:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:22.236 06:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:25.531 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:25.531 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:25.531 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:25.531 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:25.531 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:25.531 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:25.531 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:25.531 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:25.531 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:25.532 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:25.532 0000:80:04.5 (8086 
2021): Already using the vfio-pci driver 00:04:25.532 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:25.532 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:25.532 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:25.532 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:25.532 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:25.532 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:25.532 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@189 -- # verify_nr_hugepages 00:04:25.532 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@88 -- # local node 00:04:25.532 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local sorted_t 00:04:25.532 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_s 00:04:25.532 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local surp 00:04:25.532 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local resv 00:04:25.532 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local anon 00:04:25.532 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:25.532 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:04:25.532 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:25.532 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:25.532 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:25.532 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:25.532 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.532 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:25.532 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:25.532 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.532 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.532 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.532 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.532 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283760 kB' 'MemFree: 42593448 kB' 'MemAvailable: 46314016 kB' 'Buffers: 9316 kB' 'Cached: 11561944 kB' 'SwapCached: 0 kB' 'Active: 8384240 kB' 'Inactive: 3689028 kB' 'Active(anon): 7967268 kB' 'Inactive(anon): 0 kB' 'Active(file): 416972 kB' 'Inactive(file): 3689028 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 504784 kB' 'Mapped: 163268 kB' 'Shmem: 7465260 kB' 'KReclaimable: 222868 kB' 'Slab: 913888 kB' 'SReclaimable: 222868 kB' 'SUnreclaim: 691020 kB' 'KernelStack: 21968 kB' 'PageTables: 8180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481908 kB' 'Committed_AS: 9173128 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214528 kB' 'VmallocChunk: 0 kB' 'Percpu: 74816 kB' 
'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 603508 kB' 'DirectMap2M: 12713984 kB' 'DirectMap1G: 56623104 kB' 00:04:25.532 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.532 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.532 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.532 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.532 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.532 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.532 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.532 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.532 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.532 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.532 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.532 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.532 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.532 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.532 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.532 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.532 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.532 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.532 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.532 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.532 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.532 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.532 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.532 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.532 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.532 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.532 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.532 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.532 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.532 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:04:25.532 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.532 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.532 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.532 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.532 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.532 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.532 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.532 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.532 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.532 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.532 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.532 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.532 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.532 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.532 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.532 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.532 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.532 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.532 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.532 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.532 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.532 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.532 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.532 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.532 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.532 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.532 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.532 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.532 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.532 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.532 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.532 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.532 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.532 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:25.532 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.532 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.532 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.532 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.532 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.532 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.532 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.532 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.532 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.532 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.532 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.532 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.532 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.532 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.532 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.532 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # anon=0 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # 
local get=HugePages_Surp 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283760 kB' 'MemFree: 42613700 kB' 'MemAvailable: 46334268 kB' 'Buffers: 9316 kB' 'Cached: 11561948 kB' 'SwapCached: 0 kB' 'Active: 8383392 kB' 'Inactive: 3689028 kB' 'Active(anon): 7966420 kB' 'Inactive(anon): 0 kB' 'Active(file): 416972 kB' 'Inactive(file): 3689028 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 504588 kB' 'Mapped: 163120 kB' 'Shmem: 7465264 kB' 'KReclaimable: 222868 kB' 'Slab: 913880 kB' 'SReclaimable: 222868 kB' 'SUnreclaim: 691012 kB' 'KernelStack: 22016 kB' 'PageTables: 7908 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481908 kB' 'Committed_AS: 9174648 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214544 kB' 'VmallocChunk: 0 kB' 'Percpu: 74816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 603508 kB' 'DirectMap2M: 12713984 kB' 'DirectMap1G: 56623104 kB' 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.533 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.534 06:42:32 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.534 06:42:32 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.534 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.535 06:42:32 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # surp=0 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 
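The repeated "[[ key == ... ]] / continue" entries above are the xtrace of a field-by-field scan of /proc/meminfo: the file is split on ': ' into key and value, non-matching keys are skipped, and the value of the requested key (AnonHugePages, then HugePages_Surp, here HugePages_Rsvd) is echoed back to the caller. A stand-alone sketch of the same lookup pattern is below; it is a simplified stand-in, not the setup/common.sh implementation.

  # Hedged sketch of a get_meminfo-style lookup: prints the first value field
  # of the requested /proc/meminfo key, e.g. "HugePages_Surp" -> "0".
  get_meminfo_value() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done < /proc/meminfo
      return 1
  }

  anon=$(get_meminfo_value AnonHugePages)    # "0" (kB) in the dump above
  surp=$(get_meminfo_value HugePages_Surp)   # "0"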
00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283760 kB' 'MemFree: 42615936 kB' 'MemAvailable: 46336504 kB' 'Buffers: 9316 kB' 'Cached: 11561948 kB' 'SwapCached: 0 kB' 'Active: 8383560 kB' 'Inactive: 3689028 kB' 'Active(anon): 7966588 kB' 'Inactive(anon): 0 kB' 'Active(file): 416972 kB' 'Inactive(file): 3689028 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 504696 kB' 'Mapped: 163120 kB' 'Shmem: 7465264 kB' 'KReclaimable: 222868 kB' 'Slab: 913880 kB' 'SReclaimable: 222868 kB' 'SUnreclaim: 691012 kB' 'KernelStack: 22032 kB' 'PageTables: 8016 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481908 kB' 'Committed_AS: 9174672 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214528 kB' 'VmallocChunk: 0 kB' 'Percpu: 74816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 603508 kB' 'DirectMap2M: 12713984 kB' 'DirectMap1G: 56623104 kB' 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.535 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.536 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.536 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.536 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.536 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.536 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.536 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.536 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.536 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.536 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.536 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.536 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.536 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.536 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.536 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.536 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.536 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.536 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.536 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.536 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.536 06:42:32 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.536 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.536 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.536 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.536 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.536 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.536 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.536 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.536 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.536 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.536 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.536 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.536 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.536 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.536 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.536 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.536 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.536 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.536 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.536 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.536 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.536 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.536 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.536 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.536 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.536 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.536 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.536 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.536 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.536 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.536 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.536 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.536 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.536 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.536 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.536 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.536 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.536 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.536 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.536 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.536 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.536 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.536 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.536 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.536 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.536 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.536 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.536 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.536 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.536 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.536 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.536 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.536 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.536 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.536 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.536 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.536 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.536 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.536 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.536 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.536 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.536 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.536 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.536 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.536 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.536 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.536 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.536 06:42:32 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:25.536 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.536 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.536 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.536 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.536 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.536 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.536 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.536 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.536 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.536 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.536 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.536 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.536 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.536 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.536 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.536 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.536 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.537 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.537 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.537 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.537 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.537 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.537 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.537 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.537 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.537 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.537 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.537 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.537 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.537 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.537 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.537 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.537 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.537 06:42:32 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.537 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.537 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.537 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.537 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.537 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.537 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.537 06:42:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.537 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.537 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.537 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.537 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.537 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.537 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.537 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.537 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.537 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.537 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.537 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.537 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.537 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.537 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.537 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.537 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.537 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.537 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.537 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.537 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.537 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.537 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.537 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.537 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.537 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.537 06:42:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.537 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.537 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.537 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.537 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.537 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.537 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.537 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.537 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.537 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.537 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.537 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.537 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.537 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.537 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.537 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.537 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.537 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.537 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.537 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.537 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:25.537 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:25.537 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # resv=0 00:04:25.537 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024 00:04:25.537 nr_hugepages=1024 00:04:25.537 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:04:25.537 resv_hugepages=0 00:04:25.537 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:04:25.537 surplus_hugepages=0 00:04:25.537 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:04:25.537 anon_hugepages=0 00:04:25.537 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:25.537 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages )) 00:04:25.537 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:04:25.537 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:25.537 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:25.537 06:42:33 
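For readers following the trace: the block above is setup/hugepages.sh asserting that the 1024 pre-allocated hugepages are still fully accounted for, with no reserved or surplus pages. A minimal stand-alone sketch of that accounting, assuming the same /proc/meminfo counters and the expected total of 1024 taken from this run (not the verbatim SPDK script):

    # Pull the hugepage counters that the trace above reads one key at a time.
    nr=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
    echo "nr_hugepages=$nr resv_hugepages=$resv surplus_hugepages=$surp"
    # The allocation is intact when the configured total still equals
    # nr + surplus + reserved (1024 == 1024 + 0 + 0 in this run).
    (( 1024 == nr + surp + resv )) || echo "unexpected hugepage accounting" >&2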
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:25.537 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:25.537 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.537 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:25.537 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:25.537 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.537 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.537 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.537 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.537 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283760 kB' 'MemFree: 42626532 kB' 'MemAvailable: 46347100 kB' 'Buffers: 9316 kB' 'Cached: 11561988 kB' 'SwapCached: 0 kB' 'Active: 8384120 kB' 'Inactive: 3689028 kB' 'Active(anon): 7967148 kB' 'Inactive(anon): 0 kB' 'Active(file): 416972 kB' 'Inactive(file): 3689028 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 505088 kB' 'Mapped: 163120 kB' 'Shmem: 7465304 kB' 'KReclaimable: 222868 kB' 'Slab: 913808 kB' 'SReclaimable: 222868 kB' 'SUnreclaim: 690940 kB' 'KernelStack: 22000 kB' 'PageTables: 8520 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481908 kB' 'Committed_AS: 9188004 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214544 kB' 'VmallocChunk: 0 kB' 'Percpu: 74816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 603508 kB' 'DirectMap2M: 12713984 kB' 'DirectMap1G: 56623104 kB' 00:04:25.537 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.537 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.537 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.537 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.537 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.537 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.537 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.537 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.537 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.537 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.537 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.537 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
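All of the IFS=': ' / read -r var val _ / continue lines in this log come from one small parsing pattern: the meminfo file (global or per-node) is read line by line, any "Node <n> " prefix is stripped, and the loop stops at the first key matching the requested field. A sketch reconstructed from the xtrace, with illustrative names rather than the verbatim setup/common.sh:

    get_meminfo() {
        # $1 = field to look up (e.g. HugePages_Total), $2 = optional NUMA node
        local get=$1 node=$2 mem_f=/proc/meminfo var val _
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        # Per-node meminfo prefixes each line with "Node <n> "; drop it, then
        # split "Key:   value kB" on ': ' exactly as the trace does.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(sed 's/^Node [0-9]* //' "$mem_f")
        return 1
    }
    get_meminfo HugePages_Total      # -> 1024 in this run
    get_meminfo HugePages_Free 0     # -> 1024 on node0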
[setup/common.sh@31-32 xtrace, 00:04:25.537-539 06:42:33, setup.sh.hugepages.no_shrink_alloc: the /proc/meminfo dump above is walked key by key, from Buffers through FileHugePages in dump order; each key is compared against HugePages_Total and skipped with "continue"]
00:04:25.539 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:25.539 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:04:25.539 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:25.539 06:42:33 setup.sh.hugepages.no_shrink_alloc --
setup/common.sh@31 -- # read -r var val _ 00:04:25.539 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.539 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.539 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.539 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.539 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.539 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.539 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.539 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.539 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.539 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.539 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.539 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.539 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.539 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:25.539 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:25.539 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:25.539 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:04:25.539 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@26 -- # local node 00:04:25.539 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:25.539 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:04:25.539 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:25.539 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=0 00:04:25.539 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@31 -- # no_nodes=2 00:04:25.539 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:04:25.539 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:04:25.539 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:04:25.539 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:04:25.539 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:25.539 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:25.539 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:25.539 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:25.539 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.539 06:42:33 
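The get_nodes step above (setup/hugepages.sh@26-32) discovers the NUMA topology and records how the 1024 pages are spread: node0 holds all of them, node1 none. A sketch of that bookkeeping under the same sysfs layout (names illustrative, not the SPDK helper itself):

    shopt -s extglob nullglob
    declare -A nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        n=${node##*node}
        # Per-node meminfo reports "Node <n> HugePages_Total: <count>".
        nodes_sys[$n]=$(awk '/HugePages_Total:/ {print $NF}' "$node/meminfo")
    done
    echo "no_nodes=${#nodes_sys[@]}"            # 2 on this machine
    for n in "${!nodes_sys[@]}"; do
        echo "node$n=${nodes_sys[$n]}"          # node0=1024, node1=0
    done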
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:25.800 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:25.800 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.800 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.800 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.800 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.800 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32585368 kB' 'MemFree: 26226828 kB' 'MemUsed: 6358540 kB' 'SwapCached: 0 kB' 'Active: 2730840 kB' 'Inactive: 88372 kB' 'Active(anon): 2492288 kB' 'Inactive(anon): 0 kB' 'Active(file): 238552 kB' 'Inactive(file): 88372 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2601132 kB' 'Mapped: 93080 kB' 'AnonPages: 221152 kB' 'Shmem: 2274208 kB' 'KernelStack: 12744 kB' 'PageTables: 4900 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 125332 kB' 'Slab: 431416 kB' 'SReclaimable: 125332 kB' 'SUnreclaim: 306084 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:25.800 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.800 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.800 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.800 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.800 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.800 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.800 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.800 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.800 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.800 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.800 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.800 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.800 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.800 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.800 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.800 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.800 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.800 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.800 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
[setup/common.sh@31-32 xtrace, 00:04:25.800-801 06:42:33, setup.sh.hugepages.no_shrink_alloc: the node0 meminfo dump above is walked key by key, from Inactive through AnonHugePages in dump order; each key is compared against HugePages_Surp and skipped with "continue"]
00:04:25.801 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:25.801 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:04:25.801 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:25.801 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:25.801 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.801 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.801 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.801 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.801 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.801 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.801 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.801 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.801 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.801 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.801 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.801 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.801 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.801 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.801 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.801 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.801 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.801 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.801 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.801 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.801 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.801 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.801 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.801 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.801 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.801 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:25.801 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:25.801 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:04:25.801 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:04:25.801 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:04:25.801 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:04:25.801 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # echo 'node0=1024 expecting 1024' 00:04:25.801 node0=1024 expecting 1024 00:04:25.801 06:42:33 setup.sh.hugepages.no_shrink_alloc -- 
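The "node0=1024 expecting 1024" line just above is the per-node verdict: the count the test computed for the node (base pages plus the reserved/surplus values it just queried) must match what the node is expected to hold. A compact sketch of that comparison, with the expected value hardcoded from this run (illustrative only):

    expected=1024
    actual=$(awk '/HugePages_Total:/ {print $NF}' /sys/devices/system/node/node0/meminfo)
    echo "node0=$actual expecting $expected"
    [[ $actual -eq $expected ]] || { echo "unexpected hugepage layout on node0" >&2; exit 1; }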
setup/hugepages.sh@129 -- # [[ 1024 == \1\0\2\4 ]] 00:04:25.801 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # CLEAR_HUGE=no 00:04:25.801 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # NRHUGE=512 00:04:25.801 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # HUGENODE=0 00:04:25.801 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # setup output 00:04:25.801 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:25.801 06:42:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:29.097 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:29.097 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:29.097 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:29.097 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:29.097 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:29.097 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:29.097 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:29.097 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:29.097 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:29.097 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:29.097 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:29.097 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:29.097 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:29.097 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:29.097 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:29.097 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:29.097 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:29.097 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:29.097 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@194 -- # verify_nr_hugepages 00:04:29.097 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@88 -- # local node 00:04:29.097 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local sorted_t 00:04:29.097 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_s 00:04:29.097 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local surp 00:04:29.097 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local resv 00:04:29.097 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local anon 00:04:29.097 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:29.097 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:04:29.097 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:29.097 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:29.097 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:29.097 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:29.097 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.097 06:42:36 setup.sh.hugepages.no_shrink_alloc -- 
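The INFO line above is the interesting part of this pass: setup.sh was invoked with NRHUGE=512 and HUGENODE=0 but CLEAR_HUGE=no, so the existing 1024-page pool on node0 is kept rather than shrunk to 512. Per-node hugepage requests ultimately go through the node-specific sysfs knob; a sketch of that interaction (illustrative, the real logic lives in scripts/setup.sh):

    NRHUGE=512
    HUGENODE=0
    sysfs=/sys/devices/system/node/node$HUGENODE/hugepages/hugepages-2048kB/nr_hugepages
    current=$(cat "$sysfs")
    if (( current >= NRHUGE )); then
        # CLEAR_HUGE=no: keep what is already there instead of shrinking the pool.
        echo "INFO: Requested $NRHUGE hugepages but $current already allocated on node$HUGENODE"
    else
        echo "$NRHUGE" > "$sysfs"   # needs root; the kernel allocates up to the request
    fi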
setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:29.097 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:29.097 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.097 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.097 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.097 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.097 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283760 kB' 'MemFree: 42622752 kB' 'MemAvailable: 46343320 kB' 'Buffers: 9316 kB' 'Cached: 11562100 kB' 'SwapCached: 0 kB' 'Active: 8383364 kB' 'Inactive: 3689028 kB' 'Active(anon): 7966392 kB' 'Inactive(anon): 0 kB' 'Active(file): 416972 kB' 'Inactive(file): 3689028 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 504268 kB' 'Mapped: 163220 kB' 'Shmem: 7465416 kB' 'KReclaimable: 222868 kB' 'Slab: 913848 kB' 'SReclaimable: 222868 kB' 'SUnreclaim: 690980 kB' 'KernelStack: 21872 kB' 'PageTables: 7724 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481908 kB' 'Committed_AS: 9172664 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214416 kB' 'VmallocChunk: 0 kB' 'Percpu: 74816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 603508 kB' 'DirectMap2M: 12713984 kB' 'DirectMap1G: 56623104 kB' 00:04:29.097 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.097 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.097 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.097 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.097 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.097 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.097 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.097 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.097 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.097 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.097 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.097 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.097 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.097 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.097 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.098 06:42:36 
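verify_nr_hugepages (setup/hugepages.sh@194) first checks whether transparent hugepages are globally disabled (the "always [madvise] never" test a few lines up) and only then counts AnonHugePages, which is the key this next meminfo walk is looking for. A sketch of that guard using the standard kernel sysfs file (names illustrative):

    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then
        anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
        echo "anon_hugepages=${anon:-0}"                      # 0 kB in this run
    fi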
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[setup/common.sh@31-32 xtrace, 00:04:29.098-099 06:42:36, setup.sh.hugepages.no_shrink_alloc: the meminfo dump above is walked key by key, from Cached through VmallocUsed in dump order; each key is compared against AnonHugePages and skipped with "continue"]
00:04:29.099 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:29.099 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:04:29.099 06:42:36 setup.sh.hugepages.no_shrink_alloc --
setup/common.sh@31 -- # IFS=': ' 00:04:29.099 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.099 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.099 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.099 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.099 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.099 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.099 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.099 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.099 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.099 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.099 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:29.099 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:29.099 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # anon=0 00:04:29.099 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:04:29.099 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:29.099 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:29.099 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:29.099 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:29.099 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.099 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:29.099 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:29.099 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.099 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.099 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.099 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.099 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283760 kB' 'MemFree: 42623544 kB' 'MemAvailable: 46344112 kB' 'Buffers: 9316 kB' 'Cached: 11562104 kB' 'SwapCached: 0 kB' 'Active: 8383640 kB' 'Inactive: 3689028 kB' 'Active(anon): 7966668 kB' 'Inactive(anon): 0 kB' 'Active(file): 416972 kB' 'Inactive(file): 3689028 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 504472 kB' 'Mapped: 163128 kB' 'Shmem: 7465420 kB' 'KReclaimable: 222868 kB' 'Slab: 913768 kB' 'SReclaimable: 222868 kB' 'SUnreclaim: 690900 kB' 'KernelStack: 21856 kB' 'PageTables: 7664 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481908 kB' 'Committed_AS: 9172680 kB' 'VmallocTotal: 34359738367 kB' 
'VmallocUsed: 214416 kB' 'VmallocChunk: 0 kB' 'Percpu: 74816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 603508 kB' 'DirectMap2M: 12713984 kB' 'DirectMap1G: 56623104 kB' 00:04:29.099 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.099 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.099 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.099 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.099 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.099 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.099 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.099 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.099 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.099 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.099 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.099 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.099 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.099 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.099 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.099 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.099 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.099 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.099 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.099 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.099 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.099 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.099 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.099 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.099 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.099 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.099 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.099 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.099 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.099 06:42:36 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.099 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.099 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.099 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.099 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.099 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.099 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.099 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.099 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.099 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.099 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.099 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.099 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.099 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.099 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.099 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.099 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.099 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.099 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.099 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.099 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.099 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.099 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.099 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.099 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.099 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.099 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.099 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.099 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.099 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.099 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.099 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.099 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.099 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.099 
06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.099 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.099 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.099 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.099 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.099 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.099 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.099 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.099 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.099 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.099 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.100 06:42:36 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
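Before each of these scans the helper re-runs the expansion mem=("${mem[@]#Node +([0-9]) }"), visible at common.sh@29 in this trace. Its job is to drop the "Node N " prefix that per-node meminfo files carry, so the same ": "-splitting parser works for /proc/meminfo and /sys/devices/system/node/nodeN/meminfo alike. A tiny stand-alone illustration; the sample lines below are made up for the example, only the expansion itself is taken from the trace.

#!/usr/bin/env bash
shopt -s extglob   # +([0-9]) is an extended glob

# Example input: two lines in the per-node format and one already-plain line.
# The values are illustrative only.
mem=('Node 0 MemTotal: 30141880 kB'
     'Node 0 HugePages_Surp: 0'
     'MemFree: 21311772 kB')

# Same expansion as in the trace: remove a leading "Node <digits> " if present,
# leave plain /proc/meminfo-style lines untouched.
mem=("${mem[@]#Node +([0-9]) }")

printf '%s\n' "${mem[@]}"
# MemTotal: 30141880 kB
# HugePages_Surp: 0
# MemFree: 21311772 kB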
00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.100 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.101 06:42:36 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.101 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.101 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.101 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.101 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.101 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.101 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.101 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.101 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:29.101 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:29.101 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # surp=0 00:04:29.101 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:04:29.101 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:29.101 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:29.101 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:29.101 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:29.101 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.101 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:29.101 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:29.101 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.101 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.101 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.101 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.101 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283760 kB' 'MemFree: 42623544 kB' 'MemAvailable: 46344112 kB' 'Buffers: 9316 kB' 'Cached: 11562120 kB' 'SwapCached: 0 kB' 'Active: 8383760 kB' 'Inactive: 3689028 kB' 'Active(anon): 7966788 kB' 'Inactive(anon): 0 kB' 'Active(file): 416972 kB' 'Inactive(file): 3689028 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 504580 kB' 'Mapped: 163128 kB' 'Shmem: 7465436 kB' 'KReclaimable: 222868 kB' 'Slab: 913760 kB' 'SReclaimable: 222868 kB' 'SUnreclaim: 690892 kB' 'KernelStack: 21856 kB' 'PageTables: 7688 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481908 kB' 'Committed_AS: 9172464 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214400 kB' 'VmallocChunk: 0 kB' 'Percpu: 74816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 
'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 603508 kB' 'DirectMap2M: 12713984 kB' 'DirectMap1G: 56623104 kB' 00:04:29.101 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.101 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.101 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.101 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.101 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.101 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.101 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.101 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.101 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.101 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.101 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.101 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.101 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.101 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.101 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.101 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.101 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.101 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.101 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.101 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.101 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.101 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.101 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.101 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.101 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.101 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.101 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.101 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.101 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.101 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.101 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.101 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.101 
06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.101 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.101 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.101 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.101 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.101 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.101 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.101 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.101 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.101 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.101 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.101 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.101 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.101 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.101 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.101 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.101 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.101 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.101 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.101 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.101 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.101 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.101 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.101 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.101 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.101 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.101 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.101 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.101 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.101 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.101 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.101 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.101 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.101 06:42:36 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:29.101 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.101 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.101 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.102 06:42:36 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
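Once the AnonHugePages, HugePages_Surp and HugePages_Rsvd lookups complete, the caller in setup/hugepages.sh folds the results into the accounting that shows up a little further down in this trace: anon=0, surp=0, resv=0, the nr_hugepages/resv_hugepages/surplus_hugepages/anon_hugepages echoes, and the (( 1024 == nr_hugepages + surp + resv )) check. A condensed, runnable sketch of that accounting follows; get_meminfo is stubbed with the values from this run, and the assumption that nr_hugepages itself comes from HugePages_Total is mine, since the trace only shows the variable being compared.

#!/usr/bin/env bash
# Stand-in for the real helper so the sketch runs on its own; the values are
# the ones reported by the machine in this trace.
get_meminfo() {
    case $1 in
        AnonHugePages)   echo 0 ;;
        HugePages_Surp)  echo 0 ;;
        HugePages_Rsvd)  echo 0 ;;
        HugePages_Total) echo 1024 ;;
    esac
}

expected=1024                                  # the literal 1024 in the checks
anon=$(get_meminfo AnonHugePages)
surp=$(get_meminfo HugePages_Surp)
resv=$(get_meminfo HugePages_Rsvd)
nr_hugepages=$(get_meminfo HugePages_Total)    # assumed source of nr_hugepages

echo "nr_hugepages=$nr_hugepages"
echo "resv_hugepages=$resv"
echo "surplus_hugepages=$surp"
echo "anon_hugepages=$anon"

# The two checks visible at hugepages.sh@106 and @108: the configured count
# must match what the kernel reports, with surplus and reserved pages included.
(( expected == nr_hugepages + surp + resv )) || exit 1
(( expected == nr_hugepages ))               || exit 1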
00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.102 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.103 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.103 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.103 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.103 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.103 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.103 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.103 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.103 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.103 06:42:36 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.103 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:29.103 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:29.103 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # resv=0 00:04:29.103 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024 00:04:29.103 nr_hugepages=1024 00:04:29.103 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:04:29.103 resv_hugepages=0 00:04:29.103 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:04:29.103 surplus_hugepages=0 00:04:29.103 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:04:29.103 anon_hugepages=0 00:04:29.103 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:29.103 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages )) 00:04:29.103 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:04:29.103 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:29.103 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:29.103 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:29.103 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:29.103 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.103 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:29.103 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:29.103 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.103 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.103 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.103 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.103 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283760 kB' 'MemFree: 42623720 kB' 'MemAvailable: 46344288 kB' 'Buffers: 9316 kB' 'Cached: 11562144 kB' 'SwapCached: 0 kB' 'Active: 8383572 kB' 'Inactive: 3689028 kB' 'Active(anon): 7966600 kB' 'Inactive(anon): 0 kB' 'Active(file): 416972 kB' 'Inactive(file): 3689028 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 504364 kB' 'Mapped: 163128 kB' 'Shmem: 7465460 kB' 'KReclaimable: 222868 kB' 'Slab: 913760 kB' 'SReclaimable: 222868 kB' 'SUnreclaim: 690892 kB' 'KernelStack: 21840 kB' 'PageTables: 7620 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481908 kB' 'Committed_AS: 9172488 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214400 kB' 'VmallocChunk: 0 kB' 'Percpu: 74816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 
'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 603508 kB' 'DirectMap2M: 12713984 kB' 'DirectMap1G: 56623104 kB' 00:04:29.103 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.103 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.103 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.103 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.103 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.103 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.103 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.103 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.103 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.103 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.103 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.103 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.103 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.103 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.103 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.103 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.103 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.103 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.103 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.103 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.103 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.103 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.103 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.103 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.103 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.103 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.103 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.103 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.103 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.103 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.103 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.103 06:42:36 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.103 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.103 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.103 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.103 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.103 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.103 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.103 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.103 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.103 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.103 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.103 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.103 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.103 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.103 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.103 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.103 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.103 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.103 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.103 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.103 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.103 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.103 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.103 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.103 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.103 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.103 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.103 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.103 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.103 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.103 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.103 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.103 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.103 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.103 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.103 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.103 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.103 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.103 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.103 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.103 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.103 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.103 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.103 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.103 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.103 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.103 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.104 06:42:36 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
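The long run of "-- # [[ <key> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]" / "-- # continue" entries above is setup/common.sh walking the meminfo dump one field at a time under xtrace until the requested key matches, then echoing its value. A minimal standalone sketch of the same walk, for orientation only (get_meminfo_value is an illustrative name, not the SPDK helper):

  # Illustrative sketch of the key/value walk traced above, not the SPDK function.
  get_meminfo_value() {
      local key=$1 file=${2:-/proc/meminfo} var val _
      while IFS=': ' read -r var val _; do
          # Non-matching keys are simply skipped (the trace shows these as "continue").
          [[ $var == "$key" ]] && { echo "$val"; return 0; }
      done < "$file"
      return 1
  }

  get_meminfo_value HugePages_Total   # prints 1024 on this node, per the dump above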
00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@26 -- # local node 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:29.104 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=0 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@31 -- # no_nodes=2 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32585368 kB' 'MemFree: 26235284 kB' 'MemUsed: 6350084 kB' 'SwapCached: 0 kB' 'Active: 2730656 kB' 'Inactive: 88372 kB' 'Active(anon): 2492104 kB' 'Inactive(anon): 0 kB' 'Active(file): 238552 kB' 'Inactive(file): 88372 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2601140 kB' 'Mapped: 93088 kB' 'AnonPages: 221020 kB' 'Shmem: 2274216 kB' 'KernelStack: 12632 kB' 'PageTables: 4392 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 125332 kB' 'Slab: 431084 kB' 'SReclaimable: 125332 kB' 'SUnreclaim: 305752 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.105 06:42:36 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.105 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.106 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.106 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.106 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.106 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.106 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.106 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.106 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.106 06:42:36 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.106 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.106 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.106 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.106 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.106 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.106 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.106 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.106 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.106 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.106 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.106 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.106 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.106 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.106 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.106 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.106 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.106 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.106 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.106 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.106 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.106 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.106 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.106 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.106 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.106 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.106 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.106 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.106 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.106 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.106 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.106 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.106 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.106 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
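The pass being traced here is the same walk pointed at node 0: get_meminfo was called with "HugePages_Surp 0", found /sys/devices/system/node/node0/meminfo, and switched its input file to it (the "mem_f=" entries above). Per-node meminfo lines carry a "Node <n> " prefix, which the traced helper strips with a parameter expansion; a rough equivalent using sed instead, reusing the hypothetical helper from the earlier note:

  # Rough equivalent of the per-node selection traced above; sed stands in for the
  # "${mem[@]#Node +([0-9]) }" prefix strip the real helper uses.
  node=0
  mem_f=/proc/meminfo
  [[ -e /sys/devices/system/node/node$node/meminfo ]] \
      && mem_f=/sys/devices/system/node/node$node/meminfo
  # Per-node lines look like "Node 0 HugePages_Surp: 0"
  sed "s/^Node $node //" "$mem_f" | grep -w HugePages_Surp   # HugePages_Surp: 0 in this run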
00:04:29.106 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.106 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.106 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.106 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.106 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.106 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.106 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:29.106 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:29.106 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:04:29.106 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:04:29.106 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:04:29.106 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:04:29.106 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # echo 'node0=1024 expecting 1024' 00:04:29.106 node0=1024 expecting 1024 00:04:29.106 06:42:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@129 -- # [[ 1024 == \1\0\2\4 ]] 00:04:29.106 00:04:29.106 real 0m6.867s 00:04:29.106 user 0m2.563s 00:04:29.106 sys 0m4.417s 00:04:29.106 06:42:36 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:29.106 06:42:36 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:29.106 ************************************ 00:04:29.106 END TEST no_shrink_alloc 00:04:29.106 ************************************ 00:04:29.106 06:42:36 setup.sh.hugepages -- setup/hugepages.sh@206 -- # clear_hp 00:04:29.106 06:42:36 setup.sh.hugepages -- setup/hugepages.sh@36 -- # local node hp 00:04:29.106 06:42:36 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}" 00:04:29.106 06:42:36 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:29.106 06:42:36 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:04:29.106 06:42:36 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:29.106 06:42:36 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:04:29.106 06:42:36 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}" 00:04:29.106 06:42:36 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:29.106 06:42:36 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:04:29.106 06:42:36 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:29.106 06:42:36 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:04:29.106 06:42:36 setup.sh.hugepages -- setup/hugepages.sh@44 -- # export CLEAR_HUGE=yes 00:04:29.106 06:42:36 setup.sh.hugepages -- setup/hugepages.sh@44 -- # CLEAR_HUGE=yes 00:04:29.106 00:04:29.106 real 0m22.553s 00:04:29.106 user 0m7.706s 00:04:29.106 sys 0m13.392s 
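After the "node0=1024 expecting 1024" check and the timing summary, clear_hp (traced just above) zeroes every huge-page pool on every node and exports CLEAR_HUGE=yes so later stages know the pools were reset. xtrace prints only the "echo 0", not its redirection target, so the nr_hugepages destination below is an assumption; clear_all_hugepages is likewise an illustrative name, not the SPDK function:

  # Sketch of the teardown traced above (requires root); target file is assumed.
  clear_all_hugepages() {
      local node hp
      for node in /sys/devices/system/node/node[0-9]*; do
          for hp in "$node"/hugepages/hugepages-*; do
              echo 0 > "$hp/nr_hugepages"   # release this node's 2 MiB and 1 GiB pools
          done
      done
      export CLEAR_HUGE=yes
  }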
00:04:29.106 06:42:36 setup.sh.hugepages -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:29.106 06:42:36 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:29.106 ************************************ 00:04:29.106 END TEST hugepages 00:04:29.106 ************************************ 00:04:29.366 06:42:36 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh 00:04:29.366 06:42:36 setup.sh -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:29.366 06:42:36 setup.sh -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:29.366 06:42:36 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:29.366 ************************************ 00:04:29.366 START TEST driver 00:04:29.366 ************************************ 00:04:29.366 06:42:36 setup.sh.driver -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh 00:04:29.366 * Looking for test storage... 00:04:29.366 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:04:29.366 06:42:36 setup.sh.driver -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:29.366 06:42:36 setup.sh.driver -- common/autotest_common.sh@1711 -- # lcov --version 00:04:29.366 06:42:36 setup.sh.driver -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:29.366 06:42:36 setup.sh.driver -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:29.366 06:42:36 setup.sh.driver -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:29.366 06:42:36 setup.sh.driver -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:29.366 06:42:36 setup.sh.driver -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:29.366 06:42:36 setup.sh.driver -- scripts/common.sh@336 -- # IFS=.-: 00:04:29.366 06:42:36 setup.sh.driver -- scripts/common.sh@336 -- # read -ra ver1 00:04:29.366 06:42:36 setup.sh.driver -- scripts/common.sh@337 -- # IFS=.-: 00:04:29.366 06:42:36 setup.sh.driver -- scripts/common.sh@337 -- # read -ra ver2 00:04:29.366 06:42:36 setup.sh.driver -- scripts/common.sh@338 -- # local 'op=<' 00:04:29.366 06:42:36 setup.sh.driver -- scripts/common.sh@340 -- # ver1_l=2 00:04:29.366 06:42:36 setup.sh.driver -- scripts/common.sh@341 -- # ver2_l=1 00:04:29.366 06:42:36 setup.sh.driver -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:29.366 06:42:36 setup.sh.driver -- scripts/common.sh@344 -- # case "$op" in 00:04:29.366 06:42:36 setup.sh.driver -- scripts/common.sh@345 -- # : 1 00:04:29.366 06:42:36 setup.sh.driver -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:29.366 06:42:36 setup.sh.driver -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:29.366 06:42:36 setup.sh.driver -- scripts/common.sh@365 -- # decimal 1 00:04:29.366 06:42:36 setup.sh.driver -- scripts/common.sh@353 -- # local d=1 00:04:29.366 06:42:36 setup.sh.driver -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:29.366 06:42:36 setup.sh.driver -- scripts/common.sh@355 -- # echo 1 00:04:29.366 06:42:36 setup.sh.driver -- scripts/common.sh@365 -- # ver1[v]=1 00:04:29.366 06:42:36 setup.sh.driver -- scripts/common.sh@366 -- # decimal 2 00:04:29.366 06:42:36 setup.sh.driver -- scripts/common.sh@353 -- # local d=2 00:04:29.366 06:42:36 setup.sh.driver -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:29.366 06:42:36 setup.sh.driver -- scripts/common.sh@355 -- # echo 2 00:04:29.366 06:42:36 setup.sh.driver -- scripts/common.sh@366 -- # ver2[v]=2 00:04:29.366 06:42:36 setup.sh.driver -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:29.366 06:42:36 setup.sh.driver -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:29.366 06:42:36 setup.sh.driver -- scripts/common.sh@368 -- # return 0 00:04:29.366 06:42:36 setup.sh.driver -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:29.366 06:42:36 setup.sh.driver -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:29.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.366 --rc genhtml_branch_coverage=1 00:04:29.366 --rc genhtml_function_coverage=1 00:04:29.366 --rc genhtml_legend=1 00:04:29.366 --rc geninfo_all_blocks=1 00:04:29.366 --rc geninfo_unexecuted_blocks=1 00:04:29.366 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:29.366 ' 00:04:29.366 06:42:36 setup.sh.driver -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:29.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.366 --rc genhtml_branch_coverage=1 00:04:29.366 --rc genhtml_function_coverage=1 00:04:29.366 --rc genhtml_legend=1 00:04:29.366 --rc geninfo_all_blocks=1 00:04:29.366 --rc geninfo_unexecuted_blocks=1 00:04:29.366 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:29.366 ' 00:04:29.366 06:42:36 setup.sh.driver -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:29.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.367 --rc genhtml_branch_coverage=1 00:04:29.367 --rc genhtml_function_coverage=1 00:04:29.367 --rc genhtml_legend=1 00:04:29.367 --rc geninfo_all_blocks=1 00:04:29.367 --rc geninfo_unexecuted_blocks=1 00:04:29.367 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:29.367 ' 00:04:29.367 06:42:36 setup.sh.driver -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:29.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.367 --rc genhtml_branch_coverage=1 00:04:29.367 --rc genhtml_function_coverage=1 00:04:29.367 --rc genhtml_legend=1 00:04:29.367 --rc geninfo_all_blocks=1 00:04:29.367 --rc geninfo_unexecuted_blocks=1 00:04:29.367 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:29.367 ' 00:04:29.367 06:42:36 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:29.367 06:42:36 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:29.367 06:42:36 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:04:34.642 06:42:41 setup.sh.driver -- 
setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:34.642 06:42:41 setup.sh.driver -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:34.642 06:42:41 setup.sh.driver -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:34.642 06:42:41 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:34.642 ************************************ 00:04:34.642 START TEST guess_driver 00:04:34.642 ************************************ 00:04:34.642 06:42:41 setup.sh.driver.guess_driver -- common/autotest_common.sh@1129 -- # guess_driver 00:04:34.642 06:42:41 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:34.642 06:42:41 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:34.642 06:42:41 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:34.642 06:42:41 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:34.642 06:42:41 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:34.642 06:42:41 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:34.642 06:42:41 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:34.642 06:42:41 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:34.642 06:42:41 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:34.642 06:42:41 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 176 > 0 )) 00:04:34.642 06:42:41 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:34.642 06:42:41 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:34.642 06:42:41 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:34.642 06:42:41 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:34.642 06:42:41 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:34.642 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:34.642 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:34.642 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:34.642 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:34.642 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:34.642 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:34.642 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:34.642 06:42:41 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:34.642 06:42:41 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:34.642 06:42:41 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:34.642 06:42:41 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:34.642 06:42:41 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:34.642 Looking for driver=vfio-pci 00:04:34.642 06:42:41 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:34.642 06:42:41 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- 
# setup output config 00:04:34.642 06:42:41 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:34.642 06:42:41 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:37.246 06:42:44 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:37.246 06:42:44 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:37.246 06:42:44 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:37.246 06:42:44 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:37.246 06:42:44 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:37.246 06:42:44 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:37.246 06:42:44 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:37.246 06:42:44 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:37.246 06:42:44 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:37.246 06:42:44 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:37.246 06:42:44 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:37.246 06:42:44 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:37.246 06:42:44 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:37.246 06:42:44 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:37.246 06:42:44 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:37.246 06:42:44 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:37.246 06:42:44 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:37.246 06:42:44 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:37.246 06:42:44 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:37.246 06:42:44 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:37.246 06:42:44 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:37.246 06:42:44 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:37.246 06:42:44 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:37.246 06:42:44 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:37.246 06:42:44 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:37.246 06:42:44 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:37.246 06:42:44 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:37.246 06:42:44 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:37.246 06:42:44 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:37.246 06:42:44 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:37.246 06:42:44 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:37.246 06:42:44 
setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:37.246 06:42:44 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:37.246 06:42:44 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:37.246 06:42:44 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:37.246 06:42:44 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:37.246 06:42:44 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:37.246 06:42:44 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:37.246 06:42:44 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:37.246 06:42:44 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:37.246 06:42:44 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:37.246 06:42:44 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:37.246 06:42:44 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:37.246 06:42:44 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:37.246 06:42:44 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:37.246 06:42:44 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:37.246 06:42:44 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:37.246 06:42:44 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:39.152 06:42:46 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:39.152 06:42:46 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:39.152 06:42:46 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:39.152 06:42:46 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:39.152 06:42:46 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:39.153 06:42:46 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:39.153 06:42:46 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:04:43.347 00:04:43.347 real 0m9.052s 00:04:43.347 user 0m2.132s 00:04:43.347 sys 0m4.505s 00:04:43.347 06:42:50 setup.sh.driver.guess_driver -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:43.347 06:42:50 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:43.347 ************************************ 00:04:43.347 END TEST guess_driver 00:04:43.347 ************************************ 00:04:43.347 00:04:43.347 real 0m14.052s 00:04:43.347 user 0m3.566s 00:04:43.347 sys 0m7.313s 00:04:43.347 06:42:50 setup.sh.driver -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:43.347 06:42:50 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:43.347 ************************************ 00:04:43.347 END TEST driver 00:04:43.347 ************************************ 00:04:43.347 06:42:50 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh 00:04:43.347 06:42:50 setup.sh -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:43.347 06:42:50 setup.sh -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:43.347 06:42:50 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:43.347 ************************************ 00:04:43.347 START TEST devices 00:04:43.347 ************************************ 00:04:43.347 06:42:50 setup.sh.devices -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh 00:04:43.606 * Looking for test storage... 00:04:43.606 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:04:43.606 06:42:50 setup.sh.devices -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:43.606 06:42:50 setup.sh.devices -- common/autotest_common.sh@1711 -- # lcov --version 00:04:43.606 06:42:50 setup.sh.devices -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:43.606 06:42:50 setup.sh.devices -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:43.606 06:42:50 setup.sh.devices -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:43.606 06:42:50 setup.sh.devices -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:43.606 06:42:50 setup.sh.devices -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:43.606 06:42:50 setup.sh.devices -- scripts/common.sh@336 -- # IFS=.-: 00:04:43.606 06:42:50 setup.sh.devices -- scripts/common.sh@336 -- # read -ra ver1 00:04:43.606 06:42:50 setup.sh.devices -- scripts/common.sh@337 -- # IFS=.-: 00:04:43.606 06:42:50 setup.sh.devices -- scripts/common.sh@337 -- # read -ra ver2 00:04:43.606 06:42:50 setup.sh.devices -- scripts/common.sh@338 -- # local 'op=<' 00:04:43.606 06:42:50 setup.sh.devices -- scripts/common.sh@340 -- # ver1_l=2 00:04:43.606 06:42:50 setup.sh.devices -- scripts/common.sh@341 -- # ver2_l=1 00:04:43.606 06:42:50 setup.sh.devices -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:43.606 06:42:50 setup.sh.devices -- scripts/common.sh@344 -- # case "$op" in 00:04:43.606 06:42:50 setup.sh.devices -- scripts/common.sh@345 -- # : 1 00:04:43.606 06:42:50 setup.sh.devices -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:43.606 06:42:50 setup.sh.devices -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:43.606 06:42:50 setup.sh.devices -- scripts/common.sh@365 -- # decimal 1 00:04:43.606 06:42:50 setup.sh.devices -- scripts/common.sh@353 -- # local d=1 00:04:43.606 06:42:50 setup.sh.devices -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:43.606 06:42:50 setup.sh.devices -- scripts/common.sh@355 -- # echo 1 00:04:43.606 06:42:50 setup.sh.devices -- scripts/common.sh@365 -- # ver1[v]=1 00:04:43.606 06:42:50 setup.sh.devices -- scripts/common.sh@366 -- # decimal 2 00:04:43.606 06:42:50 setup.sh.devices -- scripts/common.sh@353 -- # local d=2 00:04:43.606 06:42:50 setup.sh.devices -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:43.606 06:42:50 setup.sh.devices -- scripts/common.sh@355 -- # echo 2 00:04:43.606 06:42:50 setup.sh.devices -- scripts/common.sh@366 -- # ver2[v]=2 00:04:43.606 06:42:50 setup.sh.devices -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:43.606 06:42:50 setup.sh.devices -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:43.606 06:42:50 setup.sh.devices -- scripts/common.sh@368 -- # return 0 00:04:43.606 06:42:50 setup.sh.devices -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:43.606 06:42:50 setup.sh.devices -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:43.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.606 --rc genhtml_branch_coverage=1 00:04:43.606 --rc genhtml_function_coverage=1 00:04:43.606 --rc genhtml_legend=1 00:04:43.606 --rc geninfo_all_blocks=1 00:04:43.606 --rc geninfo_unexecuted_blocks=1 00:04:43.606 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:43.606 ' 00:04:43.606 06:42:50 setup.sh.devices -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:43.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.606 --rc genhtml_branch_coverage=1 00:04:43.606 --rc genhtml_function_coverage=1 00:04:43.606 --rc genhtml_legend=1 00:04:43.606 --rc geninfo_all_blocks=1 00:04:43.606 --rc geninfo_unexecuted_blocks=1 00:04:43.606 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:43.606 ' 00:04:43.606 06:42:50 setup.sh.devices -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:43.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.606 --rc genhtml_branch_coverage=1 00:04:43.606 --rc genhtml_function_coverage=1 00:04:43.606 --rc genhtml_legend=1 00:04:43.606 --rc geninfo_all_blocks=1 00:04:43.606 --rc geninfo_unexecuted_blocks=1 00:04:43.606 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:43.606 ' 00:04:43.606 06:42:50 setup.sh.devices -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:43.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.606 --rc genhtml_branch_coverage=1 00:04:43.606 --rc genhtml_function_coverage=1 00:04:43.606 --rc genhtml_legend=1 00:04:43.606 --rc geninfo_all_blocks=1 00:04:43.606 --rc geninfo_unexecuted_blocks=1 00:04:43.606 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:43.606 ' 00:04:43.606 06:42:50 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:43.606 06:42:50 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:43.606 06:42:50 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:43.606 06:42:50 setup.sh.devices -- setup/common.sh@12 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:04:47.803 06:42:54 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:47.803 06:42:54 setup.sh.devices -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:47.803 06:42:54 setup.sh.devices -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:47.803 06:42:54 setup.sh.devices -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:04:47.803 06:42:54 setup.sh.devices -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:04:47.803 06:42:54 setup.sh.devices -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:04:47.803 06:42:54 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:04:47.803 06:42:54 setup.sh.devices -- common/autotest_common.sh@1669 -- # bdf=0000:d8:00.0 00:04:47.803 06:42:54 setup.sh.devices -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:47.803 06:42:54 setup.sh.devices -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:04:47.803 06:42:54 setup.sh.devices -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:47.803 06:42:54 setup.sh.devices -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:47.803 06:42:54 setup.sh.devices -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:47.803 06:42:54 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:47.803 06:42:54 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:47.803 06:42:54 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:47.803 06:42:54 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:47.803 06:42:54 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:47.803 06:42:54 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:47.803 06:42:54 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:47.803 06:42:54 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:47.803 06:42:54 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:d8:00.0 00:04:47.803 06:42:54 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:04:47.803 06:42:54 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:47.803 06:42:54 setup.sh.devices -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:04:47.803 06:42:54 setup.sh.devices -- scripts/common.sh@390 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:47.803 No valid GPT data, bailing 00:04:47.803 06:42:54 setup.sh.devices -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:47.803 06:42:54 setup.sh.devices -- scripts/common.sh@394 -- # pt= 00:04:47.803 06:42:54 setup.sh.devices -- scripts/common.sh@395 -- # return 1 00:04:47.803 06:42:54 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:47.803 06:42:54 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:47.803 06:42:54 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:47.803 06:42:54 setup.sh.devices -- setup/common.sh@80 -- # echo 1600321314816 00:04:47.803 06:42:54 setup.sh.devices -- setup/devices.sh@204 -- # (( 1600321314816 >= min_disk_size )) 00:04:47.803 06:42:54 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:47.803 06:42:54 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:d8:00.0 
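The trace above amounts to a disk-eligibility check: the candidate namespace must carry no partition-table signature and be at least min_disk_size (3 GiB) before it is added to the blocks array and mapped to its PCI address. A minimal standalone sketch of the same check, assuming a device name of nvme0n1 and the stock blkid/sysfs interfaces rather than the spdk-gpt.py helper:

    disk=nvme0n1                                      # assumed device name
    min_disk_size=$((3 * 1024 * 1024 * 1024))         # 3221225472 bytes, as in the trace
    pt=$(blkid -s PTTYPE -o value "/dev/$disk")       # empty when no GPT/MBR signature exists
    size=$(( $(cat "/sys/block/$disk/size") * 512 ))  # sysfs reports 512-byte sectors
    if [[ -z "$pt" ]] && (( size >= min_disk_size )); then
        echo "/dev/$disk eligible (${size} bytes)"
    fi

Here 1600321314816 bytes (roughly 1.6 TB) comfortably clears the 3 GiB floor, so nvme0n1 becomes the test disk.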
00:04:47.803 06:42:54 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:47.803 06:42:54 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:47.803 06:42:54 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:47.803 06:42:54 setup.sh.devices -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:47.803 06:42:54 setup.sh.devices -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:47.803 06:42:54 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:47.803 ************************************ 00:04:47.803 START TEST nvme_mount 00:04:47.803 ************************************ 00:04:47.803 06:42:54 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1129 -- # nvme_mount 00:04:47.803 06:42:54 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:47.804 06:42:54 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:47.804 06:42:54 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:47.804 06:42:54 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:47.804 06:42:54 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:47.804 06:42:54 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:47.804 06:42:54 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:47.804 06:42:54 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:47.804 06:42:54 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:47.804 06:42:54 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:47.804 06:42:54 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:47.804 06:42:54 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:47.804 06:42:54 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:47.804 06:42:54 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:47.804 06:42:54 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:47.804 06:42:54 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:47.804 06:42:54 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:47.804 06:42:54 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:47.804 06:42:54 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:48.742 Creating new GPT entries in memory. 00:04:48.742 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:48.742 other utilities. 00:04:48.742 06:42:55 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:48.742 06:42:55 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:48.742 06:42:55 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:04:48.742 06:42:55 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:48.742 06:42:55 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:49.680 Creating new GPT entries in memory. 00:04:49.680 The operation has completed successfully. 00:04:49.680 06:42:56 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:49.680 06:42:56 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:49.680 06:42:56 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 1123127 00:04:49.680 06:42:57 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:49.680 06:42:57 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:49.680 06:42:57 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:49.680 06:42:57 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:49.680 06:42:57 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:49.680 06:42:57 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:49.680 06:42:57 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:49.680 06:42:57 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:49.680 06:42:57 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:49.680 06:42:57 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:49.680 06:42:57 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:49.680 06:42:57 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:49.680 06:42:57 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:49.680 06:42:57 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:49.680 06:42:57 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:49.680 06:42:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.680 06:42:57 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:49.680 06:42:57 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:49.680 06:42:57 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:49.680 06:42:57 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:52.973 06:43:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:52.973 06:43:00 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.973 06:43:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:52.973 06:43:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.973 06:43:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:52.973 06:43:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.973 06:43:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:52.973 06:43:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.973 06:43:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:52.973 06:43:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.973 06:43:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:52.973 06:43:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.973 06:43:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:52.973 06:43:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.973 06:43:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:52.973 06:43:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.973 06:43:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:52.973 06:43:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.973 06:43:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:52.973 06:43:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.973 06:43:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:52.973 06:43:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.973 06:43:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:52.973 06:43:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.973 06:43:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:52.973 06:43:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.973 06:43:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:52.973 06:43:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.973 06:43:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:52.973 06:43:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.973 06:43:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:52.973 06:43:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.973 06:43:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 
0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:52.973 06:43:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:52.973 06:43:00 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:52.973 06:43:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.973 06:43:00 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:52.973 06:43:00 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:52.973 06:43:00 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:52.973 06:43:00 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:52.973 06:43:00 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:52.973 06:43:00 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:52.973 06:43:00 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:52.973 06:43:00 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:52.973 06:43:00 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:52.973 06:43:00 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:52.973 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:52.973 06:43:00 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:52.973 06:43:00 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:53.232 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:53.232 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 00:04:53.232 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:53.232 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:53.232 06:43:00 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:53.232 06:43:00 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:53.232 06:43:00 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:53.232 06:43:00 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:53.232 06:43:00 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:53.232 06:43:00 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:53.232 06:43:00 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:53.232 06:43:00 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:53.232 06:43:00 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:53.232 06:43:00 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:53.232 06:43:00 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:53.232 06:43:00 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:53.232 06:43:00 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:53.232 06:43:00 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:53.232 06:43:00 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:53.232 06:43:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.232 06:43:00 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:53.232 06:43:00 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:53.232 06:43:00 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:53.233 06:43:00 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:56.521 06:43:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:56.521 06:43:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.521 06:43:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:56.521 06:43:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.521 06:43:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:56.521 06:43:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.521 06:43:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:56.521 06:43:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.521 06:43:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:56.521 06:43:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.521 06:43:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:56.521 06:43:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.521 06:43:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:56.521 06:43:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.521 06:43:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:56.521 06:43:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.521 06:43:03 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:56.521 06:43:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.521 06:43:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:56.521 06:43:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.521 06:43:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:56.521 06:43:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.521 06:43:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:56.521 06:43:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.521 06:43:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:56.521 06:43:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.521 06:43:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:56.521 06:43:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.521 06:43:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:56.521 06:43:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.521 06:43:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:56.521 06:43:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.521 06:43:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:56.521 06:43:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:56.521 06:43:03 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:56.521 06:43:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.521 06:43:03 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:56.521 06:43:03 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:56.521 06:43:03 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:56.521 06:43:03 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:56.521 06:43:03 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:56.521 06:43:03 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:56.521 06:43:03 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:d8:00.0 data@nvme0n1 '' '' 00:04:56.521 06:43:03 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:56.521 06:43:03 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 
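What follows re-runs the verify step against the whole-disk mount. The preceding cleanup-and-remount cycle (umount, wipefs on the partition and then the disk, mkfs.ext4 -qF over the bare namespace, mount again) can be reproduced in isolation with something like the sketch below; the mount point is a made-up example, not the path used by the test:

    mnt=/mnt/nvme_test                      # hypothetical mount point
    umount "$mnt" 2>/dev/null || true       # release the previous partition mount
    wipefs --all /dev/nvme0n1p1             # drop the ext4 signature on the partition
    wipefs --all /dev/nvme0n1               # drop GPT/PMBR signatures on the disk itself
    mkfs.ext4 -qF /dev/nvme0n1              # reformat the bare disk (the log also passes a 1024M size limit)
    mkdir -p "$mnt"
    mount /dev/nvme0n1 "$mnt"

The -F flag keeps mkfs from prompting when it finds an existing filesystem, which matters here because the same device was formatted only moments earlier.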
00:04:56.521 06:43:03 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:56.521 06:43:03 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:56.521 06:43:03 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:56.521 06:43:03 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:56.521 06:43:03 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:56.521 06:43:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.521 06:43:03 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:56.521 06:43:03 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:56.521 06:43:03 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:56.521 06:43:03 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:59.807 06:43:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:59.807 06:43:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.807 06:43:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:59.807 06:43:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.807 06:43:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:59.807 06:43:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.807 06:43:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:59.807 06:43:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.807 06:43:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:59.807 06:43:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.807 06:43:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:59.807 06:43:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.807 06:43:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:59.807 06:43:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.807 06:43:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:59.807 06:43:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.807 06:43:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:59.807 06:43:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.807 06:43:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:59.807 06:43:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.807 06:43:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:59.807 06:43:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.807 06:43:06 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:59.807 06:43:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.807 06:43:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:59.807 06:43:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.807 06:43:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:59.807 06:43:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.807 06:43:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:59.807 06:43:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.807 06:43:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:59.807 06:43:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.807 06:43:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:59.807 06:43:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:59.807 06:43:06 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:59.807 06:43:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.807 06:43:07 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:59.807 06:43:07 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:59.807 06:43:07 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:59.807 06:43:07 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:59.807 06:43:07 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:59.807 06:43:07 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:59.807 06:43:07 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:59.807 06:43:07 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:59.807 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:59.807 00:04:59.807 real 0m12.231s 00:04:59.807 user 0m3.638s 00:04:59.807 sys 0m6.508s 00:04:59.807 06:43:07 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:59.807 06:43:07 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:59.807 ************************************ 00:04:59.807 END TEST nvme_mount 00:04:59.807 ************************************ 00:04:59.807 06:43:07 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:59.807 06:43:07 setup.sh.devices -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:59.807 06:43:07 setup.sh.devices -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:59.807 06:43:07 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:59.807 ************************************ 00:04:59.807 START TEST dm_mount 00:04:59.807 ************************************ 00:04:59.807 06:43:07 setup.sh.devices.dm_mount -- common/autotest_common.sh@1129 -- # 
dm_mount 00:04:59.807 06:43:07 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:59.807 06:43:07 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:59.807 06:43:07 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:59.807 06:43:07 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:59.807 06:43:07 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:59.807 06:43:07 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:59.807 06:43:07 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:59.807 06:43:07 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:59.807 06:43:07 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:59.807 06:43:07 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:59.808 06:43:07 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:59.808 06:43:07 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:59.808 06:43:07 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:59.808 06:43:07 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:59.808 06:43:07 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:59.808 06:43:07 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:59.808 06:43:07 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:59.808 06:43:07 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:59.808 06:43:07 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:59.808 06:43:07 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:59.808 06:43:07 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:00.744 Creating new GPT entries in memory. 00:05:00.744 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:00.744 other utilities. 00:05:00.744 06:43:08 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:00.744 06:43:08 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:00.744 06:43:08 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:00.744 06:43:08 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:00.744 06:43:08 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:02.122 Creating new GPT entries in memory. 00:05:02.122 The operation has completed successfully. 00:05:02.122 06:43:09 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:02.122 06:43:09 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:02.122 06:43:09 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:02.122 06:43:09 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:02.122 06:43:09 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:05:03.060 The operation has completed successfully. 
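At this point the disk carries two fresh 1 GiB partitions (sectors 2048-2099199 and 2099200-4196351) and the dm_mount test goes on to build a device-mapper node called nvme_dm_test on top of them. A rough standalone equivalent is sketched below; the linear concatenation table is an assumption for illustration, not the exact table the test script loads:

    sgdisk /dev/nvme0n1 --zap-all                    # wipe any existing partition table
    sgdisk /dev/nvme0n1 --new=1:2048:2099199         # partition 1, 2097152 sectors (1 GiB)
    sgdisk /dev/nvme0n1 --new=2:2099200:4196351      # partition 2, 2097152 sectors (1 GiB)
    partprobe /dev/nvme0n1                           # re-read the partition table
    size=2097152                                     # one partition, in 512-byte sectors
    printf '%s\n' \
      "0 $size linear /dev/nvme0n1p1 0" \
      "$size $size linear /dev/nvme0n1p2 0" | dmsetup create nvme_dm_test

Once the mapping exists it shows up as /dev/mapper/nvme_dm_test (a symlink to /dev/dm-0 in this run) and is formatted and mounted the same way the raw namespace was earlier.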
00:05:03.060 06:43:10 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:03.060 06:43:10 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:03.060 06:43:10 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 1127547 00:05:03.060 06:43:10 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:03.060 06:43:10 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:03.060 06:43:10 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:03.060 06:43:10 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:03.060 06:43:10 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:03.060 06:43:10 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:03.060 06:43:10 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:03.060 06:43:10 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:03.060 06:43:10 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:03.060 06:43:10 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:03.060 06:43:10 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:05:03.060 06:43:10 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:03.060 06:43:10 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:03.060 06:43:10 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:03.060 06:43:10 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount size= 00:05:03.060 06:43:10 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:03.060 06:43:10 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:03.060 06:43:10 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:03.060 06:43:10 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:03.060 06:43:10 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:d8:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:03.060 06:43:10 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:05:03.060 06:43:10 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:03.060 06:43:10 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:03.060 06:43:10 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:03.060 06:43:10 setup.sh.devices.dm_mount -- 
setup/devices.sh@53 -- # local found=0 00:05:03.060 06:43:10 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:03.060 06:43:10 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:05:03.060 06:43:10 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:03.060 06:43:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.060 06:43:10 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:05:03.060 06:43:10 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:03.060 06:43:10 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:03.060 06:43:10 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:05:06.350 06:43:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:06.350 06:43:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.350 06:43:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:06.350 06:43:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.350 06:43:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:06.350 06:43:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.350 06:43:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:06.350 06:43:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.350 06:43:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:06.350 06:43:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.350 06:43:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:06.350 06:43:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.350 06:43:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:06.350 06:43:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.350 06:43:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:06.350 06:43:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.350 06:43:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:06.350 06:43:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.350 06:43:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:06.350 06:43:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.350 06:43:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:06.350 06:43:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.350 06:43:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:06.350 06:43:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 
-- # read -r pci _ _ status 00:05:06.350 06:43:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:06.350 06:43:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.350 06:43:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:06.350 06:43:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.350 06:43:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:06.350 06:43:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.350 06:43:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:06.350 06:43:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.350 06:43:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:06.350 06:43:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:06.350 06:43:13 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:06.350 06:43:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.350 06:43:13 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:06.350 06:43:13 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount ]] 00:05:06.350 06:43:13 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:06.350 06:43:13 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:06.350 06:43:13 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:06.350 06:43:13 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:06.350 06:43:13 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:d8:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:06.350 06:43:13 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:05:06.350 06:43:13 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:06.350 06:43:13 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:06.350 06:43:13 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:05:06.350 06:43:13 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:06.350 06:43:13 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:06.350 06:43:13 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:06.350 06:43:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.350 06:43:13 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:05:06.350 06:43:13 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:06.350 06:43:13 
setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:06.351 06:43:13 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:05:09.644 06:43:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:09.644 06:43:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.644 06:43:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:09.644 06:43:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.644 06:43:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:09.644 06:43:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.644 06:43:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:09.644 06:43:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.644 06:43:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:09.644 06:43:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.644 06:43:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:09.644 06:43:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.644 06:43:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:09.644 06:43:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.644 06:43:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:09.644 06:43:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.644 06:43:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:09.644 06:43:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.644 06:43:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:09.644 06:43:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.644 06:43:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:09.644 06:43:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.644 06:43:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:09.644 06:43:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.644 06:43:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:09.644 06:43:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.644 06:43:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:09.644 06:43:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.644 06:43:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:09.644 06:43:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ 
status 00:05:09.644 06:43:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:09.644 06:43:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.644 06:43:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:09.644 06:43:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:09.644 06:43:16 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:09.644 06:43:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.644 06:43:16 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:09.644 06:43:16 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:09.644 06:43:16 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:09.644 06:43:16 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:09.644 06:43:16 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:09.644 06:43:16 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:09.644 06:43:16 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:09.644 06:43:16 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:09.644 06:43:16 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:09.644 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:09.644 06:43:16 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:09.644 06:43:16 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:09.644 00:05:09.644 real 0m9.613s 00:05:09.644 user 0m2.121s 00:05:09.644 sys 0m4.523s 00:05:09.644 06:43:16 setup.sh.devices.dm_mount -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:09.644 06:43:16 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:09.644 ************************************ 00:05:09.644 END TEST dm_mount 00:05:09.644 ************************************ 00:05:09.644 06:43:16 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:09.644 06:43:16 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:09.644 06:43:16 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:09.644 06:43:16 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:09.644 06:43:16 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:09.644 06:43:16 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:09.644 06:43:16 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:09.644 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:09.644 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 00:05:09.644 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:09.644 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:09.644 06:43:17 setup.sh.devices -- 
setup/devices.sh@12 -- # cleanup_dm 00:05:09.644 06:43:17 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:09.645 06:43:17 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:09.645 06:43:17 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:09.645 06:43:17 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:09.645 06:43:17 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:09.645 06:43:17 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:09.645 00:05:09.645 real 0m26.339s 00:05:09.645 user 0m7.322s 00:05:09.645 sys 0m13.894s 00:05:09.645 06:43:17 setup.sh.devices -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:09.645 06:43:17 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:09.645 ************************************ 00:05:09.645 END TEST devices 00:05:09.645 ************************************ 00:05:09.905 00:05:09.905 real 1m26.220s 00:05:09.905 user 0m25.639s 00:05:09.905 sys 0m48.709s 00:05:09.905 06:43:17 setup.sh -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:09.905 06:43:17 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:09.905 ************************************ 00:05:09.905 END TEST setup.sh 00:05:09.905 ************************************ 00:05:09.905 06:43:17 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:05:13.190 Hugepages 00:05:13.190 node hugesize free / total 00:05:13.190 node0 1048576kB 0 / 0 00:05:13.190 node0 2048kB 1024 / 1024 00:05:13.190 node1 1048576kB 0 / 0 00:05:13.190 node1 2048kB 1024 / 1024 00:05:13.190 00:05:13.190 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:13.190 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:05:13.190 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:05:13.190 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:05:13.190 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:05:13.190 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:05:13.190 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:05:13.190 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:05:13.190 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:05:13.190 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:05:13.190 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:05:13.190 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:05:13.190 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:05:13.190 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:05:13.190 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:05:13.190 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:05:13.190 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:05:13.190 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:05:13.190 06:43:20 -- spdk/autotest.sh@117 -- # uname -s 00:05:13.190 06:43:20 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:13.190 06:43:20 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:13.190 06:43:20 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:05:16.477 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:16.477 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:16.477 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:16.477 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:16.477 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:16.477 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:16.477 0000:00:04.1 (8086 2021): ioatdma 
-> vfio-pci 00:05:16.477 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:16.477 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:16.477 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:16.477 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:16.477 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:16.477 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:16.477 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:16.477 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:16.477 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:18.383 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:05:18.383 06:43:25 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:19.321 06:43:26 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:19.321 06:43:26 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:19.321 06:43:26 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:05:19.321 06:43:26 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:05:19.321 06:43:26 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:19.321 06:43:26 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:19.321 06:43:26 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:19.321 06:43:26 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:19.321 06:43:26 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:19.321 06:43:26 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:19.321 06:43:26 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:d8:00.0 00:05:19.321 06:43:26 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:05:22.612 Waiting for block devices as requested 00:05:22.612 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:22.612 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:22.612 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:22.612 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:22.612 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:22.612 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:22.612 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:22.612 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:22.612 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:22.869 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:22.869 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:22.869 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:23.127 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:23.127 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:23.127 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:23.386 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:23.386 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:05:23.645 06:43:30 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:23.645 06:43:30 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:d8:00.0 00:05:23.645 06:43:30 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:05:23.645 06:43:30 -- common/autotest_common.sh@1487 -- # grep 0000:d8:00.0/nvme/nvme 00:05:23.645 06:43:30 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:05:23.645 06:43:30 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 ]] 00:05:23.645 06:43:30 -- common/autotest_common.sh@1492 -- # basename 
/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:05:23.645 06:43:30 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:23.645 06:43:30 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:05:23.645 06:43:30 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:05:23.645 06:43:30 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:05:23.645 06:43:30 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:23.645 06:43:30 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:23.645 06:43:30 -- common/autotest_common.sh@1531 -- # oacs=' 0xe' 00:05:23.645 06:43:30 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:23.645 06:43:30 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:23.645 06:43:30 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:23.645 06:43:30 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:23.645 06:43:30 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:23.645 06:43:31 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:23.645 06:43:31 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:23.645 06:43:31 -- common/autotest_common.sh@1543 -- # continue 00:05:23.645 06:43:31 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:23.645 06:43:31 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:23.645 06:43:31 -- common/autotest_common.sh@10 -- # set +x 00:05:23.645 06:43:31 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:23.645 06:43:31 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:23.645 06:43:31 -- common/autotest_common.sh@10 -- # set +x 00:05:23.645 06:43:31 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:05:27.012 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:27.012 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:27.012 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:27.012 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:27.012 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:27.012 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:27.012 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:27.012 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:27.012 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:27.012 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:27.012 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:27.012 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:27.012 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:27.012 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:27.012 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:27.012 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:28.385 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:05:28.642 06:43:35 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:28.642 06:43:35 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:28.642 06:43:35 -- common/autotest_common.sh@10 -- # set +x 00:05:28.642 06:43:35 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:28.642 06:43:35 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:05:28.642 06:43:35 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:05:28.642 06:43:35 -- common/autotest_common.sh@1563 -- # bdfs=() 00:05:28.642 06:43:35 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:05:28.642 06:43:35 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:05:28.642 06:43:35 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:05:28.642 06:43:35 -- 
common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:28.642 06:43:35 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:28.642 06:43:35 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:28.642 06:43:35 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:28.642 06:43:36 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:28.642 06:43:36 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:28.642 06:43:36 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:28.642 06:43:36 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:d8:00.0 00:05:28.642 06:43:36 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:28.642 06:43:36 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:d8:00.0/device 00:05:28.642 06:43:36 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:05:28.642 06:43:36 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:28.642 06:43:36 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:05:28.642 06:43:36 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:05:28.642 06:43:36 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:d8:00.0 00:05:28.642 06:43:36 -- common/autotest_common.sh@1579 -- # [[ -z 0000:d8:00.0 ]] 00:05:28.642 06:43:36 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=1137073 00:05:28.642 06:43:36 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:28.642 06:43:36 -- common/autotest_common.sh@1585 -- # waitforlisten 1137073 00:05:28.642 06:43:36 -- common/autotest_common.sh@835 -- # '[' -z 1137073 ']' 00:05:28.642 06:43:36 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.642 06:43:36 -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:28.642 06:43:36 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:28.642 06:43:36 -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:28.642 06:43:36 -- common/autotest_common.sh@10 -- # set +x 00:05:28.642 [2024-12-12 06:43:36.138530] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 
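The opal_revert_cleanup step above selects its target by matching each NVMe controller's PCI device ID against 0x0a54 through sysfs reads. As an illustrative sketch only (not the autotest helper itself, and assuming the standard /sys/class/nvme layout seen on this node), the same filtering can be reproduced with a short shell loop:

  target_id=0x0a54                               # device ID matched in the trace above
  bdfs=()
  for dev in /sys/class/nvme/nvme*/device; do
      bdf=$(basename "$(readlink -f "$dev")")    # e.g. 0000:d8:00.0
      [[ $(cat /sys/bus/pci/devices/"$bdf"/device) == "$target_id" ]] && bdfs+=("$bdf")
  done
  printf '%s\n' "${bdfs[@]}"                     # prints 0000:d8:00.0 on this node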
00:05:28.642 [2024-12-12 06:43:36.138611] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1137073 ] 00:05:28.900 [2024-12-12 06:43:36.210011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.900 [2024-12-12 06:43:36.252527] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.158 06:43:36 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:29.158 06:43:36 -- common/autotest_common.sh@868 -- # return 0 00:05:29.158 06:43:36 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:05:29.158 06:43:36 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:05:29.158 06:43:36 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0 00:05:32.435 nvme0n1 00:05:32.435 06:43:39 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:32.435 [2024-12-12 06:43:39.671740] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:05:32.435 request: 00:05:32.435 { 00:05:32.435 "nvme_ctrlr_name": "nvme0", 00:05:32.435 "password": "test", 00:05:32.435 "method": "bdev_nvme_opal_revert", 00:05:32.435 "req_id": 1 00:05:32.435 } 00:05:32.435 Got JSON-RPC error response 00:05:32.435 response: 00:05:32.435 { 00:05:32.435 "code": -32602, 00:05:32.435 "message": "Invalid parameters" 00:05:32.435 } 00:05:32.435 06:43:39 -- common/autotest_common.sh@1591 -- # true 00:05:32.435 06:43:39 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:05:32.435 06:43:39 -- common/autotest_common.sh@1595 -- # killprocess 1137073 00:05:32.435 06:43:39 -- common/autotest_common.sh@954 -- # '[' -z 1137073 ']' 00:05:32.435 06:43:39 -- common/autotest_common.sh@958 -- # kill -0 1137073 00:05:32.435 06:43:39 -- common/autotest_common.sh@959 -- # uname 00:05:32.435 06:43:39 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:32.435 06:43:39 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1137073 00:05:32.435 06:43:39 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:32.435 06:43:39 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:32.435 06:43:39 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1137073' 00:05:32.435 killing process with pid 1137073 00:05:32.435 06:43:39 -- common/autotest_common.sh@973 -- # kill 1137073 00:05:32.435 06:43:39 -- common/autotest_common.sh@978 -- # wait 1137073 00:05:34.963 06:43:41 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:34.963 06:43:41 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:34.963 06:43:41 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:34.963 06:43:41 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:34.963 06:43:41 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:34.963 06:43:41 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:34.963 06:43:41 -- common/autotest_common.sh@10 -- # set +x 00:05:34.963 06:43:41 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:34.963 06:43:41 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:05:34.963 06:43:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:34.963 06:43:41 -- common/autotest_common.sh@1111 
-- # xtrace_disable 00:05:34.963 06:43:41 -- common/autotest_common.sh@10 -- # set +x 00:05:34.963 ************************************ 00:05:34.963 START TEST env 00:05:34.963 ************************************ 00:05:34.963 06:43:41 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:05:34.963 * Looking for test storage... 00:05:34.963 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env 00:05:34.963 06:43:42 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:34.963 06:43:42 env -- common/autotest_common.sh@1711 -- # lcov --version 00:05:34.963 06:43:42 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:34.963 06:43:42 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:34.963 06:43:42 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:34.963 06:43:42 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:34.963 06:43:42 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:34.963 06:43:42 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:34.963 06:43:42 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:34.963 06:43:42 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:34.963 06:43:42 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:34.963 06:43:42 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:34.963 06:43:42 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:34.963 06:43:42 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:34.963 06:43:42 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:34.963 06:43:42 env -- scripts/common.sh@344 -- # case "$op" in 00:05:34.963 06:43:42 env -- scripts/common.sh@345 -- # : 1 00:05:34.963 06:43:42 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:34.963 06:43:42 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:34.963 06:43:42 env -- scripts/common.sh@365 -- # decimal 1 00:05:34.963 06:43:42 env -- scripts/common.sh@353 -- # local d=1 00:05:34.963 06:43:42 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:34.963 06:43:42 env -- scripts/common.sh@355 -- # echo 1 00:05:34.963 06:43:42 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:34.963 06:43:42 env -- scripts/common.sh@366 -- # decimal 2 00:05:34.963 06:43:42 env -- scripts/common.sh@353 -- # local d=2 00:05:34.963 06:43:42 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:34.963 06:43:42 env -- scripts/common.sh@355 -- # echo 2 00:05:34.963 06:43:42 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:34.963 06:43:42 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:34.963 06:43:42 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:34.963 06:43:42 env -- scripts/common.sh@368 -- # return 0 00:05:34.963 06:43:42 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:34.963 06:43:42 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:34.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.963 --rc genhtml_branch_coverage=1 00:05:34.963 --rc genhtml_function_coverage=1 00:05:34.963 --rc genhtml_legend=1 00:05:34.963 --rc geninfo_all_blocks=1 00:05:34.963 --rc geninfo_unexecuted_blocks=1 00:05:34.963 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:34.963 ' 00:05:34.963 06:43:42 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:34.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.963 --rc genhtml_branch_coverage=1 00:05:34.963 --rc genhtml_function_coverage=1 00:05:34.963 --rc genhtml_legend=1 00:05:34.963 --rc geninfo_all_blocks=1 00:05:34.963 --rc geninfo_unexecuted_blocks=1 00:05:34.963 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:34.963 ' 00:05:34.963 06:43:42 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:34.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.963 --rc genhtml_branch_coverage=1 00:05:34.963 --rc genhtml_function_coverage=1 00:05:34.963 --rc genhtml_legend=1 00:05:34.963 --rc geninfo_all_blocks=1 00:05:34.963 --rc geninfo_unexecuted_blocks=1 00:05:34.963 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:34.963 ' 00:05:34.963 06:43:42 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:34.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.963 --rc genhtml_branch_coverage=1 00:05:34.963 --rc genhtml_function_coverage=1 00:05:34.963 --rc genhtml_legend=1 00:05:34.963 --rc geninfo_all_blocks=1 00:05:34.963 --rc geninfo_unexecuted_blocks=1 00:05:34.963 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:34.963 ' 00:05:34.963 06:43:42 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:05:34.963 06:43:42 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:34.963 06:43:42 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:34.963 06:43:42 env -- common/autotest_common.sh@10 -- # set +x 00:05:34.963 ************************************ 00:05:34.963 START TEST env_memory 00:05:34.963 ************************************ 00:05:34.963 06:43:42 env.env_memory -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:05:34.963 00:05:34.963 00:05:34.963 CUnit - A unit testing framework for C - Version 2.1-3 00:05:34.963 http://cunit.sourceforge.net/ 00:05:34.963 00:05:34.963 00:05:34.963 Suite: memory 00:05:34.963 Test: alloc and free memory map ...[2024-12-12 06:43:42.222455] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:34.963 passed 00:05:34.963 Test: mem map translation ...[2024-12-12 06:43:42.235152] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 596:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:34.963 [2024-12-12 06:43:42.235168] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 596:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:34.963 [2024-12-12 06:43:42.235202] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:34.963 [2024-12-12 06:43:42.235211] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:34.963 passed 00:05:34.963 Test: mem map registration ...[2024-12-12 06:43:42.255640] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 348:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:34.963 [2024-12-12 06:43:42.255656] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 348:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:34.963 passed 00:05:34.963 Test: mem map adjacent registrations ...passed 00:05:34.963 00:05:34.963 Run Summary: Type Total Ran Passed Failed Inactive 00:05:34.963 suites 1 1 n/a 0 0 00:05:34.963 tests 4 4 4 0 0 00:05:34.963 asserts 152 152 152 0 n/a 00:05:34.963 00:05:34.963 Elapsed time = 0.076 seconds 00:05:34.963 00:05:34.963 real 0m0.083s 00:05:34.963 user 0m0.078s 00:05:34.963 sys 0m0.005s 00:05:34.963 06:43:42 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:34.963 06:43:42 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:34.963 ************************************ 00:05:34.963 END TEST env_memory 00:05:34.963 ************************************ 00:05:34.963 06:43:42 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:34.963 06:43:42 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:34.963 06:43:42 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:34.963 06:43:42 env -- common/autotest_common.sh@10 -- # set +x 00:05:34.963 ************************************ 00:05:34.963 START TEST env_vtophys 00:05:34.963 ************************************ 00:05:34.963 06:43:42 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:34.963 EAL: lib.eal log level changed from notice to debug 00:05:34.963 EAL: Detected lcore 0 as core 0 on socket 0 00:05:34.963 EAL: Detected lcore 1 as core 1 on socket 0 00:05:34.963 EAL: Detected lcore 2 as core 2 on socket 0 00:05:34.963 EAL: Detected lcore 3 as 
core 3 on socket 0 00:05:34.963 EAL: Detected lcore 4 as core 4 on socket 0 00:05:34.963 EAL: Detected lcore 5 as core 5 on socket 0 00:05:34.963 EAL: Detected lcore 6 as core 6 on socket 0 00:05:34.963 EAL: Detected lcore 7 as core 8 on socket 0 00:05:34.963 EAL: Detected lcore 8 as core 9 on socket 0 00:05:34.963 EAL: Detected lcore 9 as core 10 on socket 0 00:05:34.963 EAL: Detected lcore 10 as core 11 on socket 0 00:05:34.963 EAL: Detected lcore 11 as core 12 on socket 0 00:05:34.963 EAL: Detected lcore 12 as core 13 on socket 0 00:05:34.963 EAL: Detected lcore 13 as core 14 on socket 0 00:05:34.964 EAL: Detected lcore 14 as core 16 on socket 0 00:05:34.964 EAL: Detected lcore 15 as core 17 on socket 0 00:05:34.964 EAL: Detected lcore 16 as core 18 on socket 0 00:05:34.964 EAL: Detected lcore 17 as core 19 on socket 0 00:05:34.964 EAL: Detected lcore 18 as core 20 on socket 0 00:05:34.964 EAL: Detected lcore 19 as core 21 on socket 0 00:05:34.964 EAL: Detected lcore 20 as core 22 on socket 0 00:05:34.964 EAL: Detected lcore 21 as core 24 on socket 0 00:05:34.964 EAL: Detected lcore 22 as core 25 on socket 0 00:05:34.964 EAL: Detected lcore 23 as core 26 on socket 0 00:05:34.964 EAL: Detected lcore 24 as core 27 on socket 0 00:05:34.964 EAL: Detected lcore 25 as core 28 on socket 0 00:05:34.964 EAL: Detected lcore 26 as core 29 on socket 0 00:05:34.964 EAL: Detected lcore 27 as core 30 on socket 0 00:05:34.964 EAL: Detected lcore 28 as core 0 on socket 1 00:05:34.964 EAL: Detected lcore 29 as core 1 on socket 1 00:05:34.964 EAL: Detected lcore 30 as core 2 on socket 1 00:05:34.964 EAL: Detected lcore 31 as core 3 on socket 1 00:05:34.964 EAL: Detected lcore 32 as core 4 on socket 1 00:05:34.964 EAL: Detected lcore 33 as core 5 on socket 1 00:05:34.964 EAL: Detected lcore 34 as core 6 on socket 1 00:05:34.964 EAL: Detected lcore 35 as core 8 on socket 1 00:05:34.964 EAL: Detected lcore 36 as core 9 on socket 1 00:05:34.964 EAL: Detected lcore 37 as core 10 on socket 1 00:05:34.964 EAL: Detected lcore 38 as core 11 on socket 1 00:05:34.964 EAL: Detected lcore 39 as core 12 on socket 1 00:05:34.964 EAL: Detected lcore 40 as core 13 on socket 1 00:05:34.964 EAL: Detected lcore 41 as core 14 on socket 1 00:05:34.964 EAL: Detected lcore 42 as core 16 on socket 1 00:05:34.964 EAL: Detected lcore 43 as core 17 on socket 1 00:05:34.964 EAL: Detected lcore 44 as core 18 on socket 1 00:05:34.964 EAL: Detected lcore 45 as core 19 on socket 1 00:05:34.964 EAL: Detected lcore 46 as core 20 on socket 1 00:05:34.964 EAL: Detected lcore 47 as core 21 on socket 1 00:05:34.964 EAL: Detected lcore 48 as core 22 on socket 1 00:05:34.964 EAL: Detected lcore 49 as core 24 on socket 1 00:05:34.964 EAL: Detected lcore 50 as core 25 on socket 1 00:05:34.964 EAL: Detected lcore 51 as core 26 on socket 1 00:05:34.964 EAL: Detected lcore 52 as core 27 on socket 1 00:05:34.964 EAL: Detected lcore 53 as core 28 on socket 1 00:05:34.964 EAL: Detected lcore 54 as core 29 on socket 1 00:05:34.964 EAL: Detected lcore 55 as core 30 on socket 1 00:05:34.964 EAL: Detected lcore 56 as core 0 on socket 0 00:05:34.964 EAL: Detected lcore 57 as core 1 on socket 0 00:05:34.964 EAL: Detected lcore 58 as core 2 on socket 0 00:05:34.964 EAL: Detected lcore 59 as core 3 on socket 0 00:05:34.964 EAL: Detected lcore 60 as core 4 on socket 0 00:05:34.964 EAL: Detected lcore 61 as core 5 on socket 0 00:05:34.964 EAL: Detected lcore 62 as core 6 on socket 0 00:05:34.964 EAL: Detected lcore 63 as core 8 on socket 0 00:05:34.964 EAL: 
Detected lcore 64 as core 9 on socket 0 00:05:34.964 EAL: Detected lcore 65 as core 10 on socket 0 00:05:34.964 EAL: Detected lcore 66 as core 11 on socket 0 00:05:34.964 EAL: Detected lcore 67 as core 12 on socket 0 00:05:34.964 EAL: Detected lcore 68 as core 13 on socket 0 00:05:34.964 EAL: Detected lcore 69 as core 14 on socket 0 00:05:34.964 EAL: Detected lcore 70 as core 16 on socket 0 00:05:34.964 EAL: Detected lcore 71 as core 17 on socket 0 00:05:34.964 EAL: Detected lcore 72 as core 18 on socket 0 00:05:34.964 EAL: Detected lcore 73 as core 19 on socket 0 00:05:34.964 EAL: Detected lcore 74 as core 20 on socket 0 00:05:34.964 EAL: Detected lcore 75 as core 21 on socket 0 00:05:34.964 EAL: Detected lcore 76 as core 22 on socket 0 00:05:34.964 EAL: Detected lcore 77 as core 24 on socket 0 00:05:34.964 EAL: Detected lcore 78 as core 25 on socket 0 00:05:34.964 EAL: Detected lcore 79 as core 26 on socket 0 00:05:34.964 EAL: Detected lcore 80 as core 27 on socket 0 00:05:34.964 EAL: Detected lcore 81 as core 28 on socket 0 00:05:34.964 EAL: Detected lcore 82 as core 29 on socket 0 00:05:34.964 EAL: Detected lcore 83 as core 30 on socket 0 00:05:34.964 EAL: Detected lcore 84 as core 0 on socket 1 00:05:34.964 EAL: Detected lcore 85 as core 1 on socket 1 00:05:34.964 EAL: Detected lcore 86 as core 2 on socket 1 00:05:34.964 EAL: Detected lcore 87 as core 3 on socket 1 00:05:34.964 EAL: Detected lcore 88 as core 4 on socket 1 00:05:34.964 EAL: Detected lcore 89 as core 5 on socket 1 00:05:34.964 EAL: Detected lcore 90 as core 6 on socket 1 00:05:34.964 EAL: Detected lcore 91 as core 8 on socket 1 00:05:34.964 EAL: Detected lcore 92 as core 9 on socket 1 00:05:34.964 EAL: Detected lcore 93 as core 10 on socket 1 00:05:34.964 EAL: Detected lcore 94 as core 11 on socket 1 00:05:34.964 EAL: Detected lcore 95 as core 12 on socket 1 00:05:34.964 EAL: Detected lcore 96 as core 13 on socket 1 00:05:34.964 EAL: Detected lcore 97 as core 14 on socket 1 00:05:34.964 EAL: Detected lcore 98 as core 16 on socket 1 00:05:34.964 EAL: Detected lcore 99 as core 17 on socket 1 00:05:34.964 EAL: Detected lcore 100 as core 18 on socket 1 00:05:34.964 EAL: Detected lcore 101 as core 19 on socket 1 00:05:34.964 EAL: Detected lcore 102 as core 20 on socket 1 00:05:34.964 EAL: Detected lcore 103 as core 21 on socket 1 00:05:34.964 EAL: Detected lcore 104 as core 22 on socket 1 00:05:34.964 EAL: Detected lcore 105 as core 24 on socket 1 00:05:34.964 EAL: Detected lcore 106 as core 25 on socket 1 00:05:34.964 EAL: Detected lcore 107 as core 26 on socket 1 00:05:34.964 EAL: Detected lcore 108 as core 27 on socket 1 00:05:34.964 EAL: Detected lcore 109 as core 28 on socket 1 00:05:34.964 EAL: Detected lcore 110 as core 29 on socket 1 00:05:34.964 EAL: Detected lcore 111 as core 30 on socket 1 00:05:34.964 EAL: Maximum logical cores by configuration: 128 00:05:34.964 EAL: Detected CPU lcores: 112 00:05:34.964 EAL: Detected NUMA nodes: 2 00:05:34.964 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:34.964 EAL: Checking presence of .so 'librte_eal.so.24' 00:05:34.964 EAL: Checking presence of .so 'librte_eal.so' 00:05:34.964 EAL: Detected static linkage of DPDK 00:05:34.964 EAL: No shared files mode enabled, IPC will be disabled 00:05:34.964 EAL: Bus pci wants IOVA as 'DC' 00:05:34.964 EAL: Buses did not request a specific IOVA mode. 00:05:34.964 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:34.964 EAL: Selected IOVA mode 'VA' 00:05:34.964 EAL: Probing VFIO support... 
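The lcore map above (112 detected lcores across 2 NUMA nodes) is the CPU topology the kernel exposes under sysfs; EAL enumerates it at startup. A hedged way to reproduce the "Detected lcore N as core M on socket S" lines outside of DPDK, assuming the standard topology files:

  for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
      lcore=${cpu##*cpu}
      core=$(cat "$cpu"/topology/core_id)
      socket=$(cat "$cpu"/topology/physical_package_id)
      echo "lcore $lcore -> core $core on socket $socket"
  done | sort -k2 -n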
00:05:34.964 EAL: IOMMU type 1 (Type 1) is supported 00:05:34.964 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:34.964 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:34.964 EAL: VFIO support initialized 00:05:34.964 EAL: Ask a virtual area of 0x2e000 bytes 00:05:34.964 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:34.964 EAL: Setting up physically contiguous memory... 00:05:34.964 EAL: Setting maximum number of open files to 524288 00:05:34.964 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:34.964 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:34.964 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:34.964 EAL: Ask a virtual area of 0x61000 bytes 00:05:34.964 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:34.964 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:34.964 EAL: Ask a virtual area of 0x400000000 bytes 00:05:34.964 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:34.964 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:34.964 EAL: Ask a virtual area of 0x61000 bytes 00:05:34.964 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:34.964 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:34.964 EAL: Ask a virtual area of 0x400000000 bytes 00:05:34.964 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:34.964 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:34.964 EAL: Ask a virtual area of 0x61000 bytes 00:05:34.964 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:34.964 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:34.964 EAL: Ask a virtual area of 0x400000000 bytes 00:05:34.964 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:34.964 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:34.964 EAL: Ask a virtual area of 0x61000 bytes 00:05:34.964 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:34.964 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:34.964 EAL: Ask a virtual area of 0x400000000 bytes 00:05:34.964 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:34.964 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:34.964 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:34.964 EAL: Ask a virtual area of 0x61000 bytes 00:05:34.964 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:34.964 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:34.964 EAL: Ask a virtual area of 0x400000000 bytes 00:05:34.964 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:34.964 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:34.964 EAL: Ask a virtual area of 0x61000 bytes 00:05:34.964 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:34.964 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:34.964 EAL: Ask a virtual area of 0x400000000 bytes 00:05:34.964 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:34.964 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:34.964 EAL: Ask a virtual area of 0x61000 bytes 00:05:34.964 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:34.964 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:34.964 EAL: Ask a virtual area of 0x400000000 bytes 00:05:34.964 EAL: Virtual area found at 
0x201800e00000 (size = 0x400000000) 00:05:34.964 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:34.964 EAL: Ask a virtual area of 0x61000 bytes 00:05:34.964 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:34.964 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:34.964 EAL: Ask a virtual area of 0x400000000 bytes 00:05:34.964 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:05:34.964 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:34.964 EAL: Hugepages will be freed exactly as allocated. 00:05:34.964 EAL: No shared files mode enabled, IPC is disabled 00:05:34.964 EAL: No shared files mode enabled, IPC is disabled 00:05:34.964 EAL: TSC frequency is ~2500000 KHz 00:05:34.964 EAL: Main lcore 0 is ready (tid=7ffabc465a00;cpuset=[0]) 00:05:34.964 EAL: Trying to obtain current memory policy. 00:05:34.964 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:34.964 EAL: Restoring previous memory policy: 0 00:05:34.964 EAL: request: mp_malloc_sync 00:05:34.964 EAL: No shared files mode enabled, IPC is disabled 00:05:34.964 EAL: Heap on socket 0 was expanded by 2MB 00:05:34.964 EAL: No shared files mode enabled, IPC is disabled 00:05:34.964 EAL: Mem event callback 'spdk:(nil)' registered 00:05:34.964 00:05:34.964 00:05:34.964 CUnit - A unit testing framework for C - Version 2.1-3 00:05:34.964 http://cunit.sourceforge.net/ 00:05:34.964 00:05:34.964 00:05:34.964 Suite: components_suite 00:05:34.964 Test: vtophys_malloc_test ...passed 00:05:34.964 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:34.964 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:34.965 EAL: Restoring previous memory policy: 4 00:05:34.965 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.965 EAL: request: mp_malloc_sync 00:05:34.965 EAL: No shared files mode enabled, IPC is disabled 00:05:34.965 EAL: Heap on socket 0 was expanded by 4MB 00:05:34.965 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.965 EAL: request: mp_malloc_sync 00:05:34.965 EAL: No shared files mode enabled, IPC is disabled 00:05:34.965 EAL: Heap on socket 0 was shrunk by 4MB 00:05:34.965 EAL: Trying to obtain current memory policy. 00:05:34.965 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:34.965 EAL: Restoring previous memory policy: 4 00:05:34.965 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.965 EAL: request: mp_malloc_sync 00:05:34.965 EAL: No shared files mode enabled, IPC is disabled 00:05:34.965 EAL: Heap on socket 0 was expanded by 6MB 00:05:34.965 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.965 EAL: request: mp_malloc_sync 00:05:34.965 EAL: No shared files mode enabled, IPC is disabled 00:05:34.965 EAL: Heap on socket 0 was shrunk by 6MB 00:05:34.965 EAL: Trying to obtain current memory policy. 00:05:34.965 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:34.965 EAL: Restoring previous memory policy: 4 00:05:34.965 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.965 EAL: request: mp_malloc_sync 00:05:34.965 EAL: No shared files mode enabled, IPC is disabled 00:05:34.965 EAL: Heap on socket 0 was expanded by 10MB 00:05:34.965 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.965 EAL: request: mp_malloc_sync 00:05:34.965 EAL: No shared files mode enabled, IPC is disabled 00:05:34.965 EAL: Heap on socket 0 was shrunk by 10MB 00:05:34.965 EAL: Trying to obtain current memory policy. 
00:05:34.965 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:34.965 EAL: Restoring previous memory policy: 4 00:05:34.965 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.965 EAL: request: mp_malloc_sync 00:05:34.965 EAL: No shared files mode enabled, IPC is disabled 00:05:34.965 EAL: Heap on socket 0 was expanded by 18MB 00:05:34.965 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.965 EAL: request: mp_malloc_sync 00:05:34.965 EAL: No shared files mode enabled, IPC is disabled 00:05:34.965 EAL: Heap on socket 0 was shrunk by 18MB 00:05:34.965 EAL: Trying to obtain current memory policy. 00:05:34.965 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:34.965 EAL: Restoring previous memory policy: 4 00:05:34.965 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.965 EAL: request: mp_malloc_sync 00:05:34.965 EAL: No shared files mode enabled, IPC is disabled 00:05:34.965 EAL: Heap on socket 0 was expanded by 34MB 00:05:34.965 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.965 EAL: request: mp_malloc_sync 00:05:34.965 EAL: No shared files mode enabled, IPC is disabled 00:05:34.965 EAL: Heap on socket 0 was shrunk by 34MB 00:05:34.965 EAL: Trying to obtain current memory policy. 00:05:34.965 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:34.965 EAL: Restoring previous memory policy: 4 00:05:34.965 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.965 EAL: request: mp_malloc_sync 00:05:34.965 EAL: No shared files mode enabled, IPC is disabled 00:05:34.965 EAL: Heap on socket 0 was expanded by 66MB 00:05:34.965 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.965 EAL: request: mp_malloc_sync 00:05:34.965 EAL: No shared files mode enabled, IPC is disabled 00:05:34.965 EAL: Heap on socket 0 was shrunk by 66MB 00:05:34.965 EAL: Trying to obtain current memory policy. 00:05:34.965 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:35.222 EAL: Restoring previous memory policy: 4 00:05:35.222 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.222 EAL: request: mp_malloc_sync 00:05:35.222 EAL: No shared files mode enabled, IPC is disabled 00:05:35.222 EAL: Heap on socket 0 was expanded by 130MB 00:05:35.222 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.222 EAL: request: mp_malloc_sync 00:05:35.222 EAL: No shared files mode enabled, IPC is disabled 00:05:35.222 EAL: Heap on socket 0 was shrunk by 130MB 00:05:35.222 EAL: Trying to obtain current memory policy. 00:05:35.222 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:35.222 EAL: Restoring previous memory policy: 4 00:05:35.222 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.222 EAL: request: mp_malloc_sync 00:05:35.222 EAL: No shared files mode enabled, IPC is disabled 00:05:35.222 EAL: Heap on socket 0 was expanded by 258MB 00:05:35.222 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.222 EAL: request: mp_malloc_sync 00:05:35.222 EAL: No shared files mode enabled, IPC is disabled 00:05:35.222 EAL: Heap on socket 0 was shrunk by 258MB 00:05:35.222 EAL: Trying to obtain current memory policy. 
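Each "Heap on socket 0 was expanded by …" / "shrunk by …" pair above accompanies the 'spdk:(nil)' mem event callback as DPDK maps or unmaps 2048kB hugepages from the pools reported earlier in this run (1024 pages per node). A hedged helper for watching that pool while the test runs, assuming the standard sysfs hugepage paths:

  for node in /sys/devices/system/node/node[0-9]*; do
      hp=$node/hugepages/hugepages-2048kB
      echo "$(basename "$node"): $(cat "$hp"/free_hugepages) of $(cat "$hp"/nr_hugepages) 2048kB pages free"
  done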
00:05:35.222 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:35.479 EAL: Restoring previous memory policy: 4 00:05:35.479 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.479 EAL: request: mp_malloc_sync 00:05:35.479 EAL: No shared files mode enabled, IPC is disabled 00:05:35.479 EAL: Heap on socket 0 was expanded by 514MB 00:05:35.479 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.479 EAL: request: mp_malloc_sync 00:05:35.479 EAL: No shared files mode enabled, IPC is disabled 00:05:35.479 EAL: Heap on socket 0 was shrunk by 514MB 00:05:35.479 EAL: Trying to obtain current memory policy. 00:05:35.479 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:35.736 EAL: Restoring previous memory policy: 4 00:05:35.736 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.736 EAL: request: mp_malloc_sync 00:05:35.736 EAL: No shared files mode enabled, IPC is disabled 00:05:35.736 EAL: Heap on socket 0 was expanded by 1026MB 00:05:35.736 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.993 EAL: request: mp_malloc_sync 00:05:35.993 EAL: No shared files mode enabled, IPC is disabled 00:05:35.993 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:35.993 passed 00:05:35.993 00:05:35.993 Run Summary: Type Total Ran Passed Failed Inactive 00:05:35.993 suites 1 1 n/a 0 0 00:05:35.993 tests 2 2 2 0 0 00:05:35.993 asserts 497 497 497 0 n/a 00:05:35.993 00:05:35.993 Elapsed time = 0.958 seconds 00:05:35.993 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.993 EAL: request: mp_malloc_sync 00:05:35.993 EAL: No shared files mode enabled, IPC is disabled 00:05:35.993 EAL: Heap on socket 0 was shrunk by 2MB 00:05:35.993 EAL: No shared files mode enabled, IPC is disabled 00:05:35.993 EAL: No shared files mode enabled, IPC is disabled 00:05:35.993 EAL: No shared files mode enabled, IPC is disabled 00:05:35.993 00:05:35.993 real 0m1.061s 00:05:35.993 user 0m0.628s 00:05:35.993 sys 0m0.406s 00:05:35.993 06:43:43 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:35.993 06:43:43 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:35.993 ************************************ 00:05:35.993 END TEST env_vtophys 00:05:35.993 ************************************ 00:05:35.993 06:43:43 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:05:35.993 06:43:43 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:35.993 06:43:43 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:35.993 06:43:43 env -- common/autotest_common.sh@10 -- # set +x 00:05:35.993 ************************************ 00:05:35.993 START TEST env_pci 00:05:35.993 ************************************ 00:05:35.993 06:43:43 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:05:35.993 00:05:35.993 00:05:35.993 CUnit - A unit testing framework for C - Version 2.1-3 00:05:35.993 http://cunit.sourceforge.net/ 00:05:35.993 00:05:35.993 00:05:35.993 Suite: pci 00:05:35.993 Test: pci_hook ...[2024-12-12 06:43:43.503185] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/pci.c:1118:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1138363 has claimed it 00:05:36.249 EAL: Cannot find device (10000:00:01.0) 00:05:36.249 EAL: Failed to attach device on primary process 00:05:36.249 passed 00:05:36.249 00:05:36.249 Run Summary: Type Total Ran Passed Failed Inactive 
00:05:36.249 suites 1 1 n/a 0 0 00:05:36.249 tests 1 1 1 0 0 00:05:36.249 asserts 25 25 25 0 n/a 00:05:36.249 00:05:36.249 Elapsed time = 0.036 seconds 00:05:36.249 00:05:36.249 real 0m0.056s 00:05:36.249 user 0m0.011s 00:05:36.249 sys 0m0.045s 00:05:36.249 06:43:43 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:36.249 06:43:43 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:36.249 ************************************ 00:05:36.249 END TEST env_pci 00:05:36.249 ************************************ 00:05:36.249 06:43:43 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:36.249 06:43:43 env -- env/env.sh@15 -- # uname 00:05:36.249 06:43:43 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:36.249 06:43:43 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:36.249 06:43:43 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:36.249 06:43:43 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:05:36.249 06:43:43 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:36.249 06:43:43 env -- common/autotest_common.sh@10 -- # set +x 00:05:36.249 ************************************ 00:05:36.249 START TEST env_dpdk_post_init 00:05:36.249 ************************************ 00:05:36.249 06:43:43 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:36.249 EAL: Detected CPU lcores: 112 00:05:36.249 EAL: Detected NUMA nodes: 2 00:05:36.250 EAL: Detected static linkage of DPDK 00:05:36.250 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:36.250 EAL: Selected IOVA mode 'VA' 00:05:36.250 EAL: VFIO support initialized 00:05:36.250 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:36.250 EAL: Using IOMMU type 1 (Type 1) 00:05:37.180 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:d8:00.0 (socket 1) 00:05:41.360 EAL: Releasing PCI mapped resource for 0000:d8:00.0 00:05:41.360 EAL: Calling pci_unmap_resource for 0000:d8:00.0 at 0x202001000000 00:05:41.360 Starting DPDK initialization... 00:05:41.360 Starting SPDK post initialization... 00:05:41.360 SPDK NVMe probe 00:05:41.360 Attaching to 0000:d8:00.0 00:05:41.360 Attached to 0000:d8:00.0 00:05:41.360 Cleaning up... 
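Throughout this run setup.sh flips 0000:d8:00.0 between the kernel nvme driver and vfio-pci, and the env_dpdk_post_init test above attaches to the device through vfio. A hedged spot-check of the current binding (illustrative only, standard sysfs layout assumed):

  bdf=0000:d8:00.0
  basename "$(readlink -f /sys/bus/pci/devices/$bdf/driver)"    # vfio-pci at this point in the run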
00:05:41.360 00:05:41.360 real 0m4.745s 00:05:41.360 user 0m3.352s 00:05:41.360 sys 0m0.638s 00:05:41.360 06:43:48 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:41.360 06:43:48 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:41.360 ************************************ 00:05:41.360 END TEST env_dpdk_post_init 00:05:41.360 ************************************ 00:05:41.360 06:43:48 env -- env/env.sh@26 -- # uname 00:05:41.360 06:43:48 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:41.360 06:43:48 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:41.360 06:43:48 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:41.360 06:43:48 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.360 06:43:48 env -- common/autotest_common.sh@10 -- # set +x 00:05:41.360 ************************************ 00:05:41.360 START TEST env_mem_callbacks 00:05:41.360 ************************************ 00:05:41.360 06:43:48 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:41.360 EAL: Detected CPU lcores: 112 00:05:41.360 EAL: Detected NUMA nodes: 2 00:05:41.360 EAL: Detected static linkage of DPDK 00:05:41.360 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:41.360 EAL: Selected IOVA mode 'VA' 00:05:41.360 EAL: VFIO support initialized 00:05:41.360 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:41.360 00:05:41.360 00:05:41.360 CUnit - A unit testing framework for C - Version 2.1-3 00:05:41.360 http://cunit.sourceforge.net/ 00:05:41.360 00:05:41.360 00:05:41.360 Suite: memory 00:05:41.360 Test: test ... 
00:05:41.360 register 0x200000200000 2097152 00:05:41.360 malloc 3145728 00:05:41.360 register 0x200000400000 4194304 00:05:41.360 buf 0x200000500000 len 3145728 PASSED 00:05:41.360 malloc 64 00:05:41.360 buf 0x2000004fff40 len 64 PASSED 00:05:41.360 malloc 4194304 00:05:41.360 register 0x200000800000 6291456 00:05:41.360 buf 0x200000a00000 len 4194304 PASSED 00:05:41.360 free 0x200000500000 3145728 00:05:41.360 free 0x2000004fff40 64 00:05:41.360 unregister 0x200000400000 4194304 PASSED 00:05:41.360 free 0x200000a00000 4194304 00:05:41.360 unregister 0x200000800000 6291456 PASSED 00:05:41.360 malloc 8388608 00:05:41.360 register 0x200000400000 10485760 00:05:41.360 buf 0x200000600000 len 8388608 PASSED 00:05:41.360 free 0x200000600000 8388608 00:05:41.360 unregister 0x200000400000 10485760 PASSED 00:05:41.360 passed 00:05:41.360 00:05:41.360 Run Summary: Type Total Ran Passed Failed Inactive 00:05:41.360 suites 1 1 n/a 0 0 00:05:41.360 tests 1 1 1 0 0 00:05:41.360 asserts 15 15 15 0 n/a 00:05:41.360 00:05:41.360 Elapsed time = 0.005 seconds 00:05:41.360 00:05:41.360 real 0m0.078s 00:05:41.360 user 0m0.026s 00:05:41.360 sys 0m0.052s 00:05:41.360 06:43:48 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:41.360 06:43:48 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:41.360 ************************************ 00:05:41.360 END TEST env_mem_callbacks 00:05:41.360 ************************************ 00:05:41.360 00:05:41.360 real 0m6.574s 00:05:41.360 user 0m4.348s 00:05:41.360 sys 0m1.482s 00:05:41.360 06:43:48 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:41.360 06:43:48 env -- common/autotest_common.sh@10 -- # set +x 00:05:41.360 ************************************ 00:05:41.360 END TEST env 00:05:41.360 ************************************ 00:05:41.360 06:43:48 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:05:41.360 06:43:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:41.360 06:43:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.360 06:43:48 -- common/autotest_common.sh@10 -- # set +x 00:05:41.360 ************************************ 00:05:41.360 START TEST rpc 00:05:41.360 ************************************ 00:05:41.360 06:43:48 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:05:41.360 * Looking for test storage... 
00:05:41.360 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:05:41.360 06:43:48 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:41.360 06:43:48 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:41.360 06:43:48 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:41.360 06:43:48 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:41.360 06:43:48 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:41.360 06:43:48 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:41.360 06:43:48 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:41.360 06:43:48 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:41.360 06:43:48 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:41.360 06:43:48 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:41.360 06:43:48 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:41.360 06:43:48 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:41.360 06:43:48 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:41.360 06:43:48 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:41.360 06:43:48 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:41.360 06:43:48 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:41.360 06:43:48 rpc -- scripts/common.sh@345 -- # : 1 00:05:41.360 06:43:48 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:41.360 06:43:48 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:41.360 06:43:48 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:41.360 06:43:48 rpc -- scripts/common.sh@353 -- # local d=1 00:05:41.360 06:43:48 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:41.360 06:43:48 rpc -- scripts/common.sh@355 -- # echo 1 00:05:41.360 06:43:48 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:41.360 06:43:48 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:41.360 06:43:48 rpc -- scripts/common.sh@353 -- # local d=2 00:05:41.360 06:43:48 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:41.361 06:43:48 rpc -- scripts/common.sh@355 -- # echo 2 00:05:41.361 06:43:48 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:41.361 06:43:48 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:41.361 06:43:48 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:41.361 06:43:48 rpc -- scripts/common.sh@368 -- # return 0 00:05:41.361 06:43:48 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:41.361 06:43:48 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:41.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.361 --rc genhtml_branch_coverage=1 00:05:41.361 --rc genhtml_function_coverage=1 00:05:41.361 --rc genhtml_legend=1 00:05:41.361 --rc geninfo_all_blocks=1 00:05:41.361 --rc geninfo_unexecuted_blocks=1 00:05:41.361 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:41.361 ' 00:05:41.361 06:43:48 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:41.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.361 --rc genhtml_branch_coverage=1 00:05:41.361 --rc genhtml_function_coverage=1 00:05:41.361 --rc genhtml_legend=1 00:05:41.361 --rc geninfo_all_blocks=1 00:05:41.361 --rc geninfo_unexecuted_blocks=1 00:05:41.361 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:41.361 ' 00:05:41.361 06:43:48 rpc -- common/autotest_common.sh@1725 -- # 
export 'LCOV=lcov 00:05:41.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.361 --rc genhtml_branch_coverage=1 00:05:41.361 --rc genhtml_function_coverage=1 00:05:41.361 --rc genhtml_legend=1 00:05:41.361 --rc geninfo_all_blocks=1 00:05:41.361 --rc geninfo_unexecuted_blocks=1 00:05:41.361 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:41.361 ' 00:05:41.361 06:43:48 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:41.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.361 --rc genhtml_branch_coverage=1 00:05:41.361 --rc genhtml_function_coverage=1 00:05:41.361 --rc genhtml_legend=1 00:05:41.361 --rc geninfo_all_blocks=1 00:05:41.361 --rc geninfo_unexecuted_blocks=1 00:05:41.361 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:41.361 ' 00:05:41.361 06:43:48 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:41.361 06:43:48 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1139533 00:05:41.361 06:43:48 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:41.361 06:43:48 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1139533 00:05:41.361 06:43:48 rpc -- common/autotest_common.sh@835 -- # '[' -z 1139533 ']' 00:05:41.361 06:43:48 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.361 06:43:48 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:41.361 06:43:48 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:41.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:41.361 06:43:48 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:41.361 06:43:48 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.361 [2024-12-12 06:43:48.825909] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 00:05:41.361 [2024-12-12 06:43:48.825976] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1139533 ] 00:05:41.619 [2024-12-12 06:43:48.894462] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.619 [2024-12-12 06:43:48.933770] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:41.619 [2024-12-12 06:43:48.933804] app.c: 616:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1139533' to capture a snapshot of events at runtime. 00:05:41.619 [2024-12-12 06:43:48.933813] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:41.619 [2024-12-12 06:43:48.933821] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:41.619 [2024-12-12 06:43:48.933828] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1139533 for offline analysis/debug. 
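Once the spdk_tgt launched by rpc.sh is listening on /var/tmp/spdk.sock, the rpc_cmd wrappers traced below drive it with scripts/rpc.py. A hedged manual equivalent of the first rpc_integrity steps (illustrative commands, not part of rpc.sh itself):

  rpc=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py
  $rpc -s /var/tmp/spdk.sock bdev_get_bdevs             # '[]' before any bdev is created
  $rpc -s /var/tmp/spdk.sock bdev_malloc_create 8 512   # 8MB malloc bdev with 512-byte blocks
  $rpc -s /var/tmp/spdk.sock bdev_get_bdevs             # now reports Malloc0, as printed below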
00:05:41.619 [2024-12-12 06:43:48.934392] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.877 06:43:49 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:41.877 06:43:49 rpc -- common/autotest_common.sh@868 -- # return 0 00:05:41.877 06:43:49 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:05:41.877 06:43:49 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:05:41.877 06:43:49 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:41.877 06:43:49 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:41.877 06:43:49 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:41.877 06:43:49 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.877 06:43:49 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.877 ************************************ 00:05:41.877 START TEST rpc_integrity 00:05:41.877 ************************************ 00:05:41.877 06:43:49 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:41.877 06:43:49 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:41.877 06:43:49 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.877 06:43:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.877 06:43:49 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.877 06:43:49 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:41.877 06:43:49 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:41.877 06:43:49 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:41.877 06:43:49 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:41.877 06:43:49 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.877 06:43:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.877 06:43:49 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.877 06:43:49 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:41.877 06:43:49 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:41.877 06:43:49 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.877 06:43:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.877 06:43:49 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.877 06:43:49 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:41.877 { 00:05:41.877 "name": "Malloc0", 00:05:41.877 "aliases": [ 00:05:41.877 "fcbecd5c-a522-4f14-8204-6efb7535dba9" 00:05:41.877 ], 00:05:41.877 "product_name": "Malloc disk", 00:05:41.877 "block_size": 512, 00:05:41.877 "num_blocks": 16384, 00:05:41.877 "uuid": "fcbecd5c-a522-4f14-8204-6efb7535dba9", 00:05:41.877 "assigned_rate_limits": { 00:05:41.877 "rw_ios_per_sec": 0, 00:05:41.877 "rw_mbytes_per_sec": 0, 00:05:41.877 "r_mbytes_per_sec": 0, 00:05:41.877 "w_mbytes_per_sec": 
0 00:05:41.877 }, 00:05:41.877 "claimed": false, 00:05:41.877 "zoned": false, 00:05:41.877 "supported_io_types": { 00:05:41.877 "read": true, 00:05:41.877 "write": true, 00:05:41.877 "unmap": true, 00:05:41.877 "flush": true, 00:05:41.877 "reset": true, 00:05:41.877 "nvme_admin": false, 00:05:41.877 "nvme_io": false, 00:05:41.877 "nvme_io_md": false, 00:05:41.877 "write_zeroes": true, 00:05:41.877 "zcopy": true, 00:05:41.877 "get_zone_info": false, 00:05:41.877 "zone_management": false, 00:05:41.877 "zone_append": false, 00:05:41.877 "compare": false, 00:05:41.878 "compare_and_write": false, 00:05:41.878 "abort": true, 00:05:41.878 "seek_hole": false, 00:05:41.878 "seek_data": false, 00:05:41.878 "copy": true, 00:05:41.878 "nvme_iov_md": false 00:05:41.878 }, 00:05:41.878 "memory_domains": [ 00:05:41.878 { 00:05:41.878 "dma_device_id": "system", 00:05:41.878 "dma_device_type": 1 00:05:41.878 }, 00:05:41.878 { 00:05:41.878 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:41.878 "dma_device_type": 2 00:05:41.878 } 00:05:41.878 ], 00:05:41.878 "driver_specific": {} 00:05:41.878 } 00:05:41.878 ]' 00:05:41.878 06:43:49 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:41.878 06:43:49 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:41.878 06:43:49 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:41.878 06:43:49 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.878 06:43:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.878 [2024-12-12 06:43:49.304756] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:41.878 [2024-12-12 06:43:49.304786] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:41.878 [2024-12-12 06:43:49.304808] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x5e00d80 00:05:41.878 [2024-12-12 06:43:49.304818] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:41.878 [2024-12-12 06:43:49.305687] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:41.878 [2024-12-12 06:43:49.305708] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:41.878 Passthru0 00:05:41.878 06:43:49 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.878 06:43:49 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:41.878 06:43:49 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.878 06:43:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.878 06:43:49 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.878 06:43:49 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:41.878 { 00:05:41.878 "name": "Malloc0", 00:05:41.878 "aliases": [ 00:05:41.878 "fcbecd5c-a522-4f14-8204-6efb7535dba9" 00:05:41.878 ], 00:05:41.878 "product_name": "Malloc disk", 00:05:41.878 "block_size": 512, 00:05:41.878 "num_blocks": 16384, 00:05:41.878 "uuid": "fcbecd5c-a522-4f14-8204-6efb7535dba9", 00:05:41.878 "assigned_rate_limits": { 00:05:41.878 "rw_ios_per_sec": 0, 00:05:41.878 "rw_mbytes_per_sec": 0, 00:05:41.878 "r_mbytes_per_sec": 0, 00:05:41.878 "w_mbytes_per_sec": 0 00:05:41.878 }, 00:05:41.878 "claimed": true, 00:05:41.878 "claim_type": "exclusive_write", 00:05:41.878 "zoned": false, 00:05:41.878 "supported_io_types": { 00:05:41.878 "read": true, 00:05:41.878 "write": true, 00:05:41.878 "unmap": true, 
00:05:41.878 "flush": true, 00:05:41.878 "reset": true, 00:05:41.878 "nvme_admin": false, 00:05:41.878 "nvme_io": false, 00:05:41.878 "nvme_io_md": false, 00:05:41.878 "write_zeroes": true, 00:05:41.878 "zcopy": true, 00:05:41.878 "get_zone_info": false, 00:05:41.878 "zone_management": false, 00:05:41.878 "zone_append": false, 00:05:41.878 "compare": false, 00:05:41.878 "compare_and_write": false, 00:05:41.878 "abort": true, 00:05:41.878 "seek_hole": false, 00:05:41.878 "seek_data": false, 00:05:41.878 "copy": true, 00:05:41.878 "nvme_iov_md": false 00:05:41.878 }, 00:05:41.878 "memory_domains": [ 00:05:41.878 { 00:05:41.878 "dma_device_id": "system", 00:05:41.878 "dma_device_type": 1 00:05:41.878 }, 00:05:41.878 { 00:05:41.878 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:41.878 "dma_device_type": 2 00:05:41.878 } 00:05:41.878 ], 00:05:41.878 "driver_specific": {} 00:05:41.878 }, 00:05:41.878 { 00:05:41.878 "name": "Passthru0", 00:05:41.878 "aliases": [ 00:05:41.878 "bd6460b7-bbb1-55d7-8187-11b47381249d" 00:05:41.878 ], 00:05:41.878 "product_name": "passthru", 00:05:41.878 "block_size": 512, 00:05:41.878 "num_blocks": 16384, 00:05:41.878 "uuid": "bd6460b7-bbb1-55d7-8187-11b47381249d", 00:05:41.878 "assigned_rate_limits": { 00:05:41.878 "rw_ios_per_sec": 0, 00:05:41.878 "rw_mbytes_per_sec": 0, 00:05:41.878 "r_mbytes_per_sec": 0, 00:05:41.878 "w_mbytes_per_sec": 0 00:05:41.878 }, 00:05:41.878 "claimed": false, 00:05:41.878 "zoned": false, 00:05:41.878 "supported_io_types": { 00:05:41.878 "read": true, 00:05:41.878 "write": true, 00:05:41.878 "unmap": true, 00:05:41.878 "flush": true, 00:05:41.878 "reset": true, 00:05:41.878 "nvme_admin": false, 00:05:41.878 "nvme_io": false, 00:05:41.878 "nvme_io_md": false, 00:05:41.878 "write_zeroes": true, 00:05:41.878 "zcopy": true, 00:05:41.878 "get_zone_info": false, 00:05:41.878 "zone_management": false, 00:05:41.878 "zone_append": false, 00:05:41.878 "compare": false, 00:05:41.878 "compare_and_write": false, 00:05:41.878 "abort": true, 00:05:41.878 "seek_hole": false, 00:05:41.878 "seek_data": false, 00:05:41.878 "copy": true, 00:05:41.878 "nvme_iov_md": false 00:05:41.878 }, 00:05:41.878 "memory_domains": [ 00:05:41.878 { 00:05:41.878 "dma_device_id": "system", 00:05:41.878 "dma_device_type": 1 00:05:41.878 }, 00:05:41.878 { 00:05:41.878 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:41.878 "dma_device_type": 2 00:05:41.878 } 00:05:41.878 ], 00:05:41.878 "driver_specific": { 00:05:41.878 "passthru": { 00:05:41.878 "name": "Passthru0", 00:05:41.878 "base_bdev_name": "Malloc0" 00:05:41.878 } 00:05:41.878 } 00:05:41.878 } 00:05:41.878 ]' 00:05:41.878 06:43:49 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:41.878 06:43:49 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:41.878 06:43:49 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:41.878 06:43:49 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.878 06:43:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.878 06:43:49 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.878 06:43:49 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:41.878 06:43:49 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.878 06:43:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.878 06:43:49 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.878 06:43:49 rpc.rpc_integrity -- 
rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:41.878 06:43:49 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.878 06:43:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.136 06:43:49 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.136 06:43:49 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:42.136 06:43:49 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:42.136 06:43:49 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:42.136 00:05:42.136 real 0m0.254s 00:05:42.136 user 0m0.142s 00:05:42.136 sys 0m0.048s 00:05:42.136 06:43:49 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:42.136 06:43:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.136 ************************************ 00:05:42.136 END TEST rpc_integrity 00:05:42.136 ************************************ 00:05:42.136 06:43:49 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:42.136 06:43:49 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:42.136 06:43:49 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:42.136 06:43:49 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.136 ************************************ 00:05:42.136 START TEST rpc_plugins 00:05:42.136 ************************************ 00:05:42.136 06:43:49 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:05:42.136 06:43:49 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:42.136 06:43:49 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.136 06:43:49 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:42.136 06:43:49 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.136 06:43:49 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:42.136 06:43:49 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:42.136 06:43:49 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.136 06:43:49 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:42.136 06:43:49 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.136 06:43:49 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:42.136 { 00:05:42.136 "name": "Malloc1", 00:05:42.136 "aliases": [ 00:05:42.136 "90c05a4f-bce0-4014-a1e6-8549b1a3d7ed" 00:05:42.136 ], 00:05:42.136 "product_name": "Malloc disk", 00:05:42.136 "block_size": 4096, 00:05:42.136 "num_blocks": 256, 00:05:42.136 "uuid": "90c05a4f-bce0-4014-a1e6-8549b1a3d7ed", 00:05:42.136 "assigned_rate_limits": { 00:05:42.136 "rw_ios_per_sec": 0, 00:05:42.136 "rw_mbytes_per_sec": 0, 00:05:42.136 "r_mbytes_per_sec": 0, 00:05:42.136 "w_mbytes_per_sec": 0 00:05:42.136 }, 00:05:42.136 "claimed": false, 00:05:42.136 "zoned": false, 00:05:42.136 "supported_io_types": { 00:05:42.136 "read": true, 00:05:42.136 "write": true, 00:05:42.136 "unmap": true, 00:05:42.136 "flush": true, 00:05:42.136 "reset": true, 00:05:42.136 "nvme_admin": false, 00:05:42.136 "nvme_io": false, 00:05:42.136 "nvme_io_md": false, 00:05:42.136 "write_zeroes": true, 00:05:42.136 "zcopy": true, 00:05:42.136 "get_zone_info": false, 00:05:42.136 "zone_management": false, 00:05:42.136 "zone_append": false, 00:05:42.136 "compare": false, 00:05:42.136 "compare_and_write": false, 00:05:42.136 "abort": true, 00:05:42.137 "seek_hole": false, 00:05:42.137 "seek_data": false, 00:05:42.137 "copy": true, 00:05:42.137 
"nvme_iov_md": false 00:05:42.137 }, 00:05:42.137 "memory_domains": [ 00:05:42.137 { 00:05:42.137 "dma_device_id": "system", 00:05:42.137 "dma_device_type": 1 00:05:42.137 }, 00:05:42.137 { 00:05:42.137 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:42.137 "dma_device_type": 2 00:05:42.137 } 00:05:42.137 ], 00:05:42.137 "driver_specific": {} 00:05:42.137 } 00:05:42.137 ]' 00:05:42.137 06:43:49 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:42.137 06:43:49 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:42.137 06:43:49 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:42.137 06:43:49 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.137 06:43:49 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:42.137 06:43:49 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.137 06:43:49 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:42.137 06:43:49 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.137 06:43:49 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:42.137 06:43:49 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.137 06:43:49 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:42.137 06:43:49 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:42.137 06:43:49 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:42.137 00:05:42.137 real 0m0.136s 00:05:42.137 user 0m0.078s 00:05:42.137 sys 0m0.024s 00:05:42.137 06:43:49 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:42.137 06:43:49 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:42.137 ************************************ 00:05:42.137 END TEST rpc_plugins 00:05:42.137 ************************************ 00:05:42.395 06:43:49 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:42.395 06:43:49 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:42.395 06:43:49 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:42.395 06:43:49 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.395 ************************************ 00:05:42.395 START TEST rpc_trace_cmd_test 00:05:42.395 ************************************ 00:05:42.395 06:43:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:05:42.395 06:43:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:42.395 06:43:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:42.395 06:43:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.395 06:43:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:42.395 06:43:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.395 06:43:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:42.395 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1139533", 00:05:42.395 "tpoint_group_mask": "0x8", 00:05:42.395 "iscsi_conn": { 00:05:42.395 "mask": "0x2", 00:05:42.395 "tpoint_mask": "0x0" 00:05:42.395 }, 00:05:42.395 "scsi": { 00:05:42.395 "mask": "0x4", 00:05:42.395 "tpoint_mask": "0x0" 00:05:42.395 }, 00:05:42.395 "bdev": { 00:05:42.395 "mask": "0x8", 00:05:42.395 "tpoint_mask": "0xffffffffffffffff" 00:05:42.395 }, 00:05:42.395 "nvmf_rdma": { 00:05:42.395 "mask": "0x10", 00:05:42.395 "tpoint_mask": "0x0" 00:05:42.395 }, 00:05:42.395 "nvmf_tcp": { 00:05:42.395 "mask": "0x20", 
00:05:42.395 "tpoint_mask": "0x0" 00:05:42.395 }, 00:05:42.395 "ftl": { 00:05:42.395 "mask": "0x40", 00:05:42.395 "tpoint_mask": "0x0" 00:05:42.395 }, 00:05:42.395 "blobfs": { 00:05:42.395 "mask": "0x80", 00:05:42.395 "tpoint_mask": "0x0" 00:05:42.395 }, 00:05:42.395 "dsa": { 00:05:42.395 "mask": "0x200", 00:05:42.395 "tpoint_mask": "0x0" 00:05:42.395 }, 00:05:42.395 "thread": { 00:05:42.395 "mask": "0x400", 00:05:42.395 "tpoint_mask": "0x0" 00:05:42.395 }, 00:05:42.395 "nvme_pcie": { 00:05:42.395 "mask": "0x800", 00:05:42.395 "tpoint_mask": "0x0" 00:05:42.395 }, 00:05:42.395 "iaa": { 00:05:42.395 "mask": "0x1000", 00:05:42.395 "tpoint_mask": "0x0" 00:05:42.395 }, 00:05:42.395 "nvme_tcp": { 00:05:42.395 "mask": "0x2000", 00:05:42.395 "tpoint_mask": "0x0" 00:05:42.395 }, 00:05:42.395 "bdev_nvme": { 00:05:42.395 "mask": "0x4000", 00:05:42.395 "tpoint_mask": "0x0" 00:05:42.395 }, 00:05:42.395 "sock": { 00:05:42.395 "mask": "0x8000", 00:05:42.395 "tpoint_mask": "0x0" 00:05:42.395 }, 00:05:42.395 "blob": { 00:05:42.395 "mask": "0x10000", 00:05:42.395 "tpoint_mask": "0x0" 00:05:42.395 }, 00:05:42.395 "bdev_raid": { 00:05:42.395 "mask": "0x20000", 00:05:42.395 "tpoint_mask": "0x0" 00:05:42.395 }, 00:05:42.395 "scheduler": { 00:05:42.395 "mask": "0x40000", 00:05:42.395 "tpoint_mask": "0x0" 00:05:42.395 } 00:05:42.395 }' 00:05:42.395 06:43:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:42.395 06:43:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:42.395 06:43:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:42.395 06:43:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:42.395 06:43:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:42.395 06:43:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:42.395 06:43:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:42.395 06:43:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:42.395 06:43:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:42.654 06:43:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:42.654 00:05:42.654 real 0m0.189s 00:05:42.654 user 0m0.152s 00:05:42.654 sys 0m0.028s 00:05:42.654 06:43:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:42.654 06:43:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:42.654 ************************************ 00:05:42.654 END TEST rpc_trace_cmd_test 00:05:42.654 ************************************ 00:05:42.654 06:43:49 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:42.654 06:43:49 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:42.654 06:43:49 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:42.654 06:43:49 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:42.654 06:43:49 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:42.654 06:43:49 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.654 ************************************ 00:05:42.654 START TEST rpc_daemon_integrity 00:05:42.654 ************************************ 00:05:42.654 06:43:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:42.654 06:43:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:42.654 06:43:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.654 06:43:50 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.654 06:43:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.654 06:43:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:42.654 06:43:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:42.654 06:43:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:42.654 06:43:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:42.654 06:43:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.654 06:43:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.654 06:43:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.654 06:43:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:42.654 06:43:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:42.654 06:43:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.654 06:43:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.654 06:43:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.654 06:43:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:42.654 { 00:05:42.654 "name": "Malloc2", 00:05:42.654 "aliases": [ 00:05:42.654 "0de30b9e-22ea-4917-b80f-619f2ec6be07" 00:05:42.654 ], 00:05:42.654 "product_name": "Malloc disk", 00:05:42.654 "block_size": 512, 00:05:42.654 "num_blocks": 16384, 00:05:42.654 "uuid": "0de30b9e-22ea-4917-b80f-619f2ec6be07", 00:05:42.654 "assigned_rate_limits": { 00:05:42.654 "rw_ios_per_sec": 0, 00:05:42.654 "rw_mbytes_per_sec": 0, 00:05:42.654 "r_mbytes_per_sec": 0, 00:05:42.654 "w_mbytes_per_sec": 0 00:05:42.654 }, 00:05:42.654 "claimed": false, 00:05:42.654 "zoned": false, 00:05:42.654 "supported_io_types": { 00:05:42.654 "read": true, 00:05:42.654 "write": true, 00:05:42.654 "unmap": true, 00:05:42.654 "flush": true, 00:05:42.654 "reset": true, 00:05:42.654 "nvme_admin": false, 00:05:42.654 "nvme_io": false, 00:05:42.654 "nvme_io_md": false, 00:05:42.654 "write_zeroes": true, 00:05:42.654 "zcopy": true, 00:05:42.654 "get_zone_info": false, 00:05:42.654 "zone_management": false, 00:05:42.654 "zone_append": false, 00:05:42.654 "compare": false, 00:05:42.654 "compare_and_write": false, 00:05:42.654 "abort": true, 00:05:42.654 "seek_hole": false, 00:05:42.654 "seek_data": false, 00:05:42.654 "copy": true, 00:05:42.654 "nvme_iov_md": false 00:05:42.654 }, 00:05:42.654 "memory_domains": [ 00:05:42.654 { 00:05:42.654 "dma_device_id": "system", 00:05:42.654 "dma_device_type": 1 00:05:42.654 }, 00:05:42.654 { 00:05:42.654 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:42.654 "dma_device_type": 2 00:05:42.654 } 00:05:42.654 ], 00:05:42.654 "driver_specific": {} 00:05:42.654 } 00:05:42.654 ]' 00:05:42.654 06:43:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:42.654 06:43:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:42.654 06:43:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:42.654 06:43:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.654 06:43:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.654 [2024-12-12 06:43:50.138897] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:42.654 
[2024-12-12 06:43:50.138932] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:42.654 [2024-12-12 06:43:50.138968] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x5cd37f0 00:05:42.654 [2024-12-12 06:43:50.138979] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:42.654 [2024-12-12 06:43:50.139798] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:42.654 [2024-12-12 06:43:50.139820] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:42.654 Passthru0 00:05:42.654 06:43:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.654 06:43:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:42.654 06:43:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.654 06:43:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.654 06:43:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.654 06:43:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:42.654 { 00:05:42.654 "name": "Malloc2", 00:05:42.654 "aliases": [ 00:05:42.654 "0de30b9e-22ea-4917-b80f-619f2ec6be07" 00:05:42.654 ], 00:05:42.654 "product_name": "Malloc disk", 00:05:42.654 "block_size": 512, 00:05:42.654 "num_blocks": 16384, 00:05:42.654 "uuid": "0de30b9e-22ea-4917-b80f-619f2ec6be07", 00:05:42.654 "assigned_rate_limits": { 00:05:42.654 "rw_ios_per_sec": 0, 00:05:42.654 "rw_mbytes_per_sec": 0, 00:05:42.654 "r_mbytes_per_sec": 0, 00:05:42.654 "w_mbytes_per_sec": 0 00:05:42.654 }, 00:05:42.654 "claimed": true, 00:05:42.654 "claim_type": "exclusive_write", 00:05:42.654 "zoned": false, 00:05:42.654 "supported_io_types": { 00:05:42.654 "read": true, 00:05:42.654 "write": true, 00:05:42.654 "unmap": true, 00:05:42.654 "flush": true, 00:05:42.654 "reset": true, 00:05:42.654 "nvme_admin": false, 00:05:42.654 "nvme_io": false, 00:05:42.654 "nvme_io_md": false, 00:05:42.654 "write_zeroes": true, 00:05:42.654 "zcopy": true, 00:05:42.654 "get_zone_info": false, 00:05:42.654 "zone_management": false, 00:05:42.654 "zone_append": false, 00:05:42.654 "compare": false, 00:05:42.654 "compare_and_write": false, 00:05:42.654 "abort": true, 00:05:42.654 "seek_hole": false, 00:05:42.654 "seek_data": false, 00:05:42.654 "copy": true, 00:05:42.654 "nvme_iov_md": false 00:05:42.654 }, 00:05:42.654 "memory_domains": [ 00:05:42.654 { 00:05:42.654 "dma_device_id": "system", 00:05:42.654 "dma_device_type": 1 00:05:42.654 }, 00:05:42.654 { 00:05:42.654 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:42.654 "dma_device_type": 2 00:05:42.654 } 00:05:42.654 ], 00:05:42.654 "driver_specific": {} 00:05:42.654 }, 00:05:42.654 { 00:05:42.654 "name": "Passthru0", 00:05:42.654 "aliases": [ 00:05:42.654 "09b27f0e-0c88-59b5-a0c5-9f9c07444604" 00:05:42.654 ], 00:05:42.654 "product_name": "passthru", 00:05:42.654 "block_size": 512, 00:05:42.654 "num_blocks": 16384, 00:05:42.654 "uuid": "09b27f0e-0c88-59b5-a0c5-9f9c07444604", 00:05:42.654 "assigned_rate_limits": { 00:05:42.654 "rw_ios_per_sec": 0, 00:05:42.654 "rw_mbytes_per_sec": 0, 00:05:42.654 "r_mbytes_per_sec": 0, 00:05:42.654 "w_mbytes_per_sec": 0 00:05:42.654 }, 00:05:42.654 "claimed": false, 00:05:42.654 "zoned": false, 00:05:42.654 "supported_io_types": { 00:05:42.654 "read": true, 00:05:42.654 "write": true, 00:05:42.654 "unmap": true, 00:05:42.654 "flush": true, 00:05:42.654 "reset": true, 
00:05:42.654 "nvme_admin": false, 00:05:42.654 "nvme_io": false, 00:05:42.654 "nvme_io_md": false, 00:05:42.654 "write_zeroes": true, 00:05:42.655 "zcopy": true, 00:05:42.655 "get_zone_info": false, 00:05:42.655 "zone_management": false, 00:05:42.655 "zone_append": false, 00:05:42.655 "compare": false, 00:05:42.655 "compare_and_write": false, 00:05:42.655 "abort": true, 00:05:42.655 "seek_hole": false, 00:05:42.655 "seek_data": false, 00:05:42.655 "copy": true, 00:05:42.655 "nvme_iov_md": false 00:05:42.655 }, 00:05:42.655 "memory_domains": [ 00:05:42.655 { 00:05:42.655 "dma_device_id": "system", 00:05:42.655 "dma_device_type": 1 00:05:42.655 }, 00:05:42.655 { 00:05:42.655 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:42.655 "dma_device_type": 2 00:05:42.655 } 00:05:42.655 ], 00:05:42.655 "driver_specific": { 00:05:42.655 "passthru": { 00:05:42.655 "name": "Passthru0", 00:05:42.655 "base_bdev_name": "Malloc2" 00:05:42.655 } 00:05:42.655 } 00:05:42.655 } 00:05:42.655 ]' 00:05:42.655 06:43:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:42.913 06:43:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:42.913 06:43:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:42.913 06:43:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.913 06:43:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.913 06:43:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.913 06:43:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:42.913 06:43:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.913 06:43:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.913 06:43:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.913 06:43:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:42.913 06:43:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.913 06:43:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.913 06:43:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.913 06:43:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:42.913 06:43:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:42.913 06:43:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:42.913 00:05:42.913 real 0m0.288s 00:05:42.913 user 0m0.192s 00:05:42.913 sys 0m0.041s 00:05:42.913 06:43:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:42.913 06:43:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.913 ************************************ 00:05:42.913 END TEST rpc_daemon_integrity 00:05:42.913 ************************************ 00:05:42.913 06:43:50 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:42.913 06:43:50 rpc -- rpc/rpc.sh@84 -- # killprocess 1139533 00:05:42.913 06:43:50 rpc -- common/autotest_common.sh@954 -- # '[' -z 1139533 ']' 00:05:42.913 06:43:50 rpc -- common/autotest_common.sh@958 -- # kill -0 1139533 00:05:42.913 06:43:50 rpc -- common/autotest_common.sh@959 -- # uname 00:05:42.913 06:43:50 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:42.913 06:43:50 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1139533 
00:05:42.913 06:43:50 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:42.913 06:43:50 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:42.913 06:43:50 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1139533' 00:05:42.913 killing process with pid 1139533 00:05:42.913 06:43:50 rpc -- common/autotest_common.sh@973 -- # kill 1139533 00:05:42.913 06:43:50 rpc -- common/autotest_common.sh@978 -- # wait 1139533 00:05:43.172 00:05:43.172 real 0m2.054s 00:05:43.172 user 0m2.577s 00:05:43.172 sys 0m0.772s 00:05:43.430 06:43:50 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:43.430 06:43:50 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.430 ************************************ 00:05:43.430 END TEST rpc 00:05:43.430 ************************************ 00:05:43.430 06:43:50 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:43.430 06:43:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:43.430 06:43:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:43.430 06:43:50 -- common/autotest_common.sh@10 -- # set +x 00:05:43.430 ************************************ 00:05:43.430 START TEST skip_rpc 00:05:43.430 ************************************ 00:05:43.430 06:43:50 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:43.430 * Looking for test storage... 00:05:43.430 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:05:43.430 06:43:50 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:43.430 06:43:50 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:43.430 06:43:50 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:43.430 06:43:50 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:43.430 06:43:50 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:43.430 06:43:50 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:43.430 06:43:50 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:43.430 06:43:50 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:43.430 06:43:50 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:43.430 06:43:50 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:43.430 06:43:50 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:43.430 06:43:50 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:43.430 06:43:50 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:43.430 06:43:50 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:43.430 06:43:50 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:43.430 06:43:50 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:43.430 06:43:50 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:43.430 06:43:50 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:43.430 06:43:50 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:43.430 06:43:50 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:43.688 06:43:50 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:43.688 06:43:50 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:43.688 06:43:50 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:43.688 06:43:50 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:43.688 06:43:50 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:43.688 06:43:50 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:43.688 06:43:50 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:43.688 06:43:50 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:43.688 06:43:50 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:43.688 06:43:50 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:43.688 06:43:50 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:43.688 06:43:50 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:43.688 06:43:50 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:43.688 06:43:50 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:43.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.688 --rc genhtml_branch_coverage=1 00:05:43.688 --rc genhtml_function_coverage=1 00:05:43.688 --rc genhtml_legend=1 00:05:43.688 --rc geninfo_all_blocks=1 00:05:43.688 --rc geninfo_unexecuted_blocks=1 00:05:43.688 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:43.688 ' 00:05:43.688 06:43:50 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:43.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.688 --rc genhtml_branch_coverage=1 00:05:43.688 --rc genhtml_function_coverage=1 00:05:43.688 --rc genhtml_legend=1 00:05:43.688 --rc geninfo_all_blocks=1 00:05:43.688 --rc geninfo_unexecuted_blocks=1 00:05:43.688 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:43.688 ' 00:05:43.688 06:43:50 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:43.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.688 --rc genhtml_branch_coverage=1 00:05:43.688 --rc genhtml_function_coverage=1 00:05:43.688 --rc genhtml_legend=1 00:05:43.688 --rc geninfo_all_blocks=1 00:05:43.688 --rc geninfo_unexecuted_blocks=1 00:05:43.688 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:43.688 ' 00:05:43.688 06:43:50 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:43.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.688 --rc genhtml_branch_coverage=1 00:05:43.688 --rc genhtml_function_coverage=1 00:05:43.688 --rc genhtml_legend=1 00:05:43.688 --rc geninfo_all_blocks=1 00:05:43.688 --rc geninfo_unexecuted_blocks=1 00:05:43.688 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:43.688 ' 00:05:43.688 06:43:50 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:05:43.688 06:43:50 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:05:43.688 06:43:50 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:43.688 06:43:50 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:43.688 06:43:50 
skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:43.688 06:43:50 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.688 ************************************ 00:05:43.688 START TEST skip_rpc 00:05:43.688 ************************************ 00:05:43.688 06:43:51 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:05:43.688 06:43:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1139999 00:05:43.688 06:43:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:43.688 06:43:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:43.688 06:43:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:43.688 [2024-12-12 06:43:51.033068] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 00:05:43.688 [2024-12-12 06:43:51.033146] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1139999 ] 00:05:43.688 [2024-12-12 06:43:51.103709] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.688 [2024-12-12 06:43:51.142631] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.949 06:43:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:48.949 06:43:56 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:48.949 06:43:56 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:48.949 06:43:56 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:48.949 06:43:56 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:48.949 06:43:56 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:48.949 06:43:56 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:48.949 06:43:56 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:48.949 06:43:56 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.949 06:43:56 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.949 06:43:56 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:48.949 06:43:56 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:48.949 06:43:56 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:48.949 06:43:56 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:48.949 06:43:56 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:48.949 06:43:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:48.949 06:43:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1139999 00:05:48.949 06:43:56 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 1139999 ']' 00:05:48.949 06:43:56 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 1139999 00:05:48.949 06:43:56 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:48.949 06:43:56 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:48.949 06:43:56 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1139999 
00:05:48.950 06:43:56 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:48.950 06:43:56 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:48.950 06:43:56 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1139999' 00:05:48.950 killing process with pid 1139999 00:05:48.950 06:43:56 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 1139999 00:05:48.950 06:43:56 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 1139999 00:05:48.950 00:05:48.950 real 0m5.374s 00:05:48.950 user 0m5.138s 00:05:48.950 sys 0m0.291s 00:05:48.950 06:43:56 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:48.950 06:43:56 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.950 ************************************ 00:05:48.950 END TEST skip_rpc 00:05:48.950 ************************************ 00:05:48.950 06:43:56 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:48.950 06:43:56 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:48.950 06:43:56 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:48.950 06:43:56 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.950 ************************************ 00:05:48.950 START TEST skip_rpc_with_json 00:05:48.950 ************************************ 00:05:48.950 06:43:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:05:48.950 06:43:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:48.950 06:43:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1141081 00:05:48.950 06:43:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:48.950 06:43:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:48.950 06:43:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1141081 00:05:48.950 06:43:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 1141081 ']' 00:05:48.950 06:43:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.950 06:43:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:48.950 06:43:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:48.950 06:43:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:48.950 06:43:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:49.207 [2024-12-12 06:43:56.492002] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 
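[editor's note] The skip_rpc test traced above starts spdk_tgt with --no-rpc-server -m 0x1, sleeps 5 seconds, and then asserts that an RPC must fail by wrapping the call in the NOT helper, so a non-zero exit is the passing condition. A reduced sketch of that pattern follows; expect_failure is an illustrative stand-in for the NOT/killprocess helpers in autotest_common.sh, and the rpc.py path is assumed from the SPDK checkout location seen in this log.

#!/usr/bin/env bash
# Sketch of the "RPC must fail when the server is disabled" check traced above.
SPDK_BIN=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt
RPC=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py   # assumed path

expect_failure() {                  # illustrative stand-in for the NOT helper
    if "$@"; then
        echo "FAIL: '$*' succeeded but was expected to fail" >&2
        return 1
    fi
    return 0
}

"$SPDK_BIN" --no-rpc-server -m 0x1 &
spdk_pid=$!
trap 'kill -9 $spdk_pid 2>/dev/null' EXIT
sleep 5                             # same fixed wait the test uses before probing

# With no RPC server listening, spdk_get_version has to fail.
expect_failure "$RPC" spdk_get_version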
00:05:49.207 [2024-12-12 06:43:56.492086] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1141081 ] 00:05:49.207 [2024-12-12 06:43:56.562289] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.207 [2024-12-12 06:43:56.600669] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.464 06:43:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:49.464 06:43:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:05:49.464 06:43:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:49.464 06:43:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.464 06:43:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:49.464 [2024-12-12 06:43:56.812674] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:49.464 request: 00:05:49.464 { 00:05:49.464 "trtype": "tcp", 00:05:49.464 "method": "nvmf_get_transports", 00:05:49.464 "req_id": 1 00:05:49.464 } 00:05:49.464 Got JSON-RPC error response 00:05:49.464 response: 00:05:49.464 { 00:05:49.464 "code": -19, 00:05:49.464 "message": "No such device" 00:05:49.464 } 00:05:49.464 06:43:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:49.464 06:43:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:49.464 06:43:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.464 06:43:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:49.464 [2024-12-12 06:43:56.824774] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:49.464 06:43:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.464 06:43:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:49.464 06:43:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.464 06:43:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:49.721 06:43:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.721 06:43:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:05:49.721 { 00:05:49.721 "subsystems": [ 00:05:49.721 { 00:05:49.721 "subsystem": "scheduler", 00:05:49.721 "config": [ 00:05:49.721 { 00:05:49.721 "method": "framework_set_scheduler", 00:05:49.721 "params": { 00:05:49.721 "name": "static" 00:05:49.721 } 00:05:49.721 } 00:05:49.721 ] 00:05:49.721 }, 00:05:49.721 { 00:05:49.721 "subsystem": "vmd", 00:05:49.721 "config": [] 00:05:49.721 }, 00:05:49.721 { 00:05:49.721 "subsystem": "sock", 00:05:49.721 "config": [ 00:05:49.721 { 00:05:49.721 "method": "sock_set_default_impl", 00:05:49.721 "params": { 00:05:49.721 "impl_name": "posix" 00:05:49.721 } 00:05:49.721 }, 00:05:49.721 { 00:05:49.721 "method": "sock_impl_set_options", 00:05:49.721 "params": { 00:05:49.721 "impl_name": "ssl", 00:05:49.721 "recv_buf_size": 4096, 00:05:49.721 "send_buf_size": 4096, 00:05:49.721 "enable_recv_pipe": true, 00:05:49.721 "enable_quickack": false, 00:05:49.721 
"enable_placement_id": 0, 00:05:49.721 "enable_zerocopy_send_server": true, 00:05:49.721 "enable_zerocopy_send_client": false, 00:05:49.721 "zerocopy_threshold": 0, 00:05:49.721 "tls_version": 0, 00:05:49.721 "enable_ktls": false 00:05:49.721 } 00:05:49.721 }, 00:05:49.721 { 00:05:49.721 "method": "sock_impl_set_options", 00:05:49.721 "params": { 00:05:49.721 "impl_name": "posix", 00:05:49.721 "recv_buf_size": 2097152, 00:05:49.721 "send_buf_size": 2097152, 00:05:49.721 "enable_recv_pipe": true, 00:05:49.721 "enable_quickack": false, 00:05:49.721 "enable_placement_id": 0, 00:05:49.721 "enable_zerocopy_send_server": true, 00:05:49.721 "enable_zerocopy_send_client": false, 00:05:49.721 "zerocopy_threshold": 0, 00:05:49.721 "tls_version": 0, 00:05:49.721 "enable_ktls": false 00:05:49.721 } 00:05:49.721 } 00:05:49.721 ] 00:05:49.721 }, 00:05:49.721 { 00:05:49.721 "subsystem": "iobuf", 00:05:49.721 "config": [ 00:05:49.721 { 00:05:49.721 "method": "iobuf_set_options", 00:05:49.721 "params": { 00:05:49.721 "small_pool_count": 8192, 00:05:49.721 "large_pool_count": 1024, 00:05:49.721 "small_bufsize": 8192, 00:05:49.721 "large_bufsize": 135168, 00:05:49.721 "enable_numa": false 00:05:49.721 } 00:05:49.721 } 00:05:49.721 ] 00:05:49.721 }, 00:05:49.721 { 00:05:49.721 "subsystem": "keyring", 00:05:49.721 "config": [] 00:05:49.721 }, 00:05:49.721 { 00:05:49.721 "subsystem": "vfio_user_target", 00:05:49.721 "config": null 00:05:49.721 }, 00:05:49.721 { 00:05:49.721 "subsystem": "fsdev", 00:05:49.721 "config": [ 00:05:49.721 { 00:05:49.721 "method": "fsdev_set_opts", 00:05:49.721 "params": { 00:05:49.721 "fsdev_io_pool_size": 65535, 00:05:49.721 "fsdev_io_cache_size": 256 00:05:49.721 } 00:05:49.721 } 00:05:49.721 ] 00:05:49.721 }, 00:05:49.721 { 00:05:49.721 "subsystem": "accel", 00:05:49.721 "config": [ 00:05:49.721 { 00:05:49.721 "method": "accel_set_options", 00:05:49.721 "params": { 00:05:49.721 "small_cache_size": 128, 00:05:49.721 "large_cache_size": 16, 00:05:49.721 "task_count": 2048, 00:05:49.721 "sequence_count": 2048, 00:05:49.721 "buf_count": 2048 00:05:49.721 } 00:05:49.721 } 00:05:49.721 ] 00:05:49.721 }, 00:05:49.721 { 00:05:49.722 "subsystem": "bdev", 00:05:49.722 "config": [ 00:05:49.722 { 00:05:49.722 "method": "bdev_set_options", 00:05:49.722 "params": { 00:05:49.722 "bdev_io_pool_size": 65535, 00:05:49.722 "bdev_io_cache_size": 256, 00:05:49.722 "bdev_auto_examine": true, 00:05:49.722 "iobuf_small_cache_size": 128, 00:05:49.722 "iobuf_large_cache_size": 16 00:05:49.722 } 00:05:49.722 }, 00:05:49.722 { 00:05:49.722 "method": "bdev_raid_set_options", 00:05:49.722 "params": { 00:05:49.722 "process_window_size_kb": 1024, 00:05:49.722 "process_max_bandwidth_mb_sec": 0 00:05:49.722 } 00:05:49.722 }, 00:05:49.722 { 00:05:49.722 "method": "bdev_nvme_set_options", 00:05:49.722 "params": { 00:05:49.722 "action_on_timeout": "none", 00:05:49.722 "timeout_us": 0, 00:05:49.722 "timeout_admin_us": 0, 00:05:49.722 "keep_alive_timeout_ms": 10000, 00:05:49.722 "arbitration_burst": 0, 00:05:49.722 "low_priority_weight": 0, 00:05:49.722 "medium_priority_weight": 0, 00:05:49.722 "high_priority_weight": 0, 00:05:49.722 "nvme_adminq_poll_period_us": 10000, 00:05:49.722 "nvme_ioq_poll_period_us": 0, 00:05:49.722 "io_queue_requests": 0, 00:05:49.722 "delay_cmd_submit": true, 00:05:49.722 "transport_retry_count": 4, 00:05:49.722 "bdev_retry_count": 3, 00:05:49.722 "transport_ack_timeout": 0, 00:05:49.722 "ctrlr_loss_timeout_sec": 0, 00:05:49.722 "reconnect_delay_sec": 0, 00:05:49.722 
"fast_io_fail_timeout_sec": 0, 00:05:49.722 "disable_auto_failback": false, 00:05:49.722 "generate_uuids": false, 00:05:49.722 "transport_tos": 0, 00:05:49.722 "nvme_error_stat": false, 00:05:49.722 "rdma_srq_size": 0, 00:05:49.722 "io_path_stat": false, 00:05:49.722 "allow_accel_sequence": false, 00:05:49.722 "rdma_max_cq_size": 0, 00:05:49.722 "rdma_cm_event_timeout_ms": 0, 00:05:49.722 "dhchap_digests": [ 00:05:49.722 "sha256", 00:05:49.722 "sha384", 00:05:49.722 "sha512" 00:05:49.722 ], 00:05:49.722 "dhchap_dhgroups": [ 00:05:49.722 "null", 00:05:49.722 "ffdhe2048", 00:05:49.722 "ffdhe3072", 00:05:49.722 "ffdhe4096", 00:05:49.722 "ffdhe6144", 00:05:49.722 "ffdhe8192" 00:05:49.722 ], 00:05:49.722 "rdma_umr_per_io": false 00:05:49.722 } 00:05:49.722 }, 00:05:49.722 { 00:05:49.722 "method": "bdev_nvme_set_hotplug", 00:05:49.722 "params": { 00:05:49.722 "period_us": 100000, 00:05:49.722 "enable": false 00:05:49.722 } 00:05:49.722 }, 00:05:49.722 { 00:05:49.722 "method": "bdev_iscsi_set_options", 00:05:49.722 "params": { 00:05:49.722 "timeout_sec": 30 00:05:49.722 } 00:05:49.722 }, 00:05:49.722 { 00:05:49.722 "method": "bdev_wait_for_examine" 00:05:49.722 } 00:05:49.722 ] 00:05:49.722 }, 00:05:49.722 { 00:05:49.722 "subsystem": "nvmf", 00:05:49.722 "config": [ 00:05:49.722 { 00:05:49.722 "method": "nvmf_set_config", 00:05:49.722 "params": { 00:05:49.722 "discovery_filter": "match_any", 00:05:49.722 "admin_cmd_passthru": { 00:05:49.722 "identify_ctrlr": false 00:05:49.722 }, 00:05:49.722 "dhchap_digests": [ 00:05:49.722 "sha256", 00:05:49.722 "sha384", 00:05:49.722 "sha512" 00:05:49.722 ], 00:05:49.722 "dhchap_dhgroups": [ 00:05:49.722 "null", 00:05:49.722 "ffdhe2048", 00:05:49.722 "ffdhe3072", 00:05:49.722 "ffdhe4096", 00:05:49.722 "ffdhe6144", 00:05:49.722 "ffdhe8192" 00:05:49.722 ] 00:05:49.722 } 00:05:49.722 }, 00:05:49.722 { 00:05:49.722 "method": "nvmf_set_max_subsystems", 00:05:49.722 "params": { 00:05:49.722 "max_subsystems": 1024 00:05:49.722 } 00:05:49.722 }, 00:05:49.722 { 00:05:49.722 "method": "nvmf_set_crdt", 00:05:49.722 "params": { 00:05:49.722 "crdt1": 0, 00:05:49.722 "crdt2": 0, 00:05:49.722 "crdt3": 0 00:05:49.722 } 00:05:49.722 }, 00:05:49.722 { 00:05:49.722 "method": "nvmf_create_transport", 00:05:49.722 "params": { 00:05:49.722 "trtype": "TCP", 00:05:49.722 "max_queue_depth": 128, 00:05:49.722 "max_io_qpairs_per_ctrlr": 127, 00:05:49.722 "in_capsule_data_size": 4096, 00:05:49.722 "max_io_size": 131072, 00:05:49.722 "io_unit_size": 131072, 00:05:49.722 "max_aq_depth": 128, 00:05:49.722 "num_shared_buffers": 511, 00:05:49.722 "buf_cache_size": 4294967295, 00:05:49.722 "dif_insert_or_strip": false, 00:05:49.722 "zcopy": false, 00:05:49.722 "c2h_success": true, 00:05:49.722 "sock_priority": 0, 00:05:49.722 "abort_timeout_sec": 1, 00:05:49.722 "ack_timeout": 0, 00:05:49.722 "data_wr_pool_size": 0 00:05:49.722 } 00:05:49.722 } 00:05:49.722 ] 00:05:49.722 }, 00:05:49.722 { 00:05:49.722 "subsystem": "nbd", 00:05:49.722 "config": [] 00:05:49.722 }, 00:05:49.722 { 00:05:49.722 "subsystem": "ublk", 00:05:49.722 "config": [] 00:05:49.722 }, 00:05:49.722 { 00:05:49.722 "subsystem": "vhost_blk", 00:05:49.722 "config": [] 00:05:49.722 }, 00:05:49.722 { 00:05:49.722 "subsystem": "scsi", 00:05:49.722 "config": null 00:05:49.722 }, 00:05:49.722 { 00:05:49.722 "subsystem": "iscsi", 00:05:49.722 "config": [ 00:05:49.722 { 00:05:49.722 "method": "iscsi_set_options", 00:05:49.722 "params": { 00:05:49.722 "node_base": "iqn.2016-06.io.spdk", 00:05:49.722 "max_sessions": 128, 00:05:49.722 
"max_connections_per_session": 2, 00:05:49.722 "max_queue_depth": 64, 00:05:49.722 "default_time2wait": 2, 00:05:49.722 "default_time2retain": 20, 00:05:49.722 "first_burst_length": 8192, 00:05:49.722 "immediate_data": true, 00:05:49.722 "allow_duplicated_isid": false, 00:05:49.722 "error_recovery_level": 0, 00:05:49.722 "nop_timeout": 60, 00:05:49.722 "nop_in_interval": 30, 00:05:49.722 "disable_chap": false, 00:05:49.722 "require_chap": false, 00:05:49.722 "mutual_chap": false, 00:05:49.722 "chap_group": 0, 00:05:49.722 "max_large_datain_per_connection": 64, 00:05:49.722 "max_r2t_per_connection": 4, 00:05:49.722 "pdu_pool_size": 36864, 00:05:49.722 "immediate_data_pool_size": 16384, 00:05:49.722 "data_out_pool_size": 2048 00:05:49.722 } 00:05:49.722 } 00:05:49.722 ] 00:05:49.722 }, 00:05:49.722 { 00:05:49.722 "subsystem": "vhost_scsi", 00:05:49.722 "config": [] 00:05:49.722 } 00:05:49.722 ] 00:05:49.722 } 00:05:49.722 06:43:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:49.722 06:43:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1141081 00:05:49.722 06:43:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 1141081 ']' 00:05:49.722 06:43:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 1141081 00:05:49.722 06:43:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:49.722 06:43:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:49.722 06:43:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1141081 00:05:49.722 06:43:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:49.722 06:43:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:49.722 06:43:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1141081' 00:05:49.722 killing process with pid 1141081 00:05:49.722 06:43:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 1141081 00:05:49.722 06:43:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 1141081 00:05:49.979 06:43:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1141098 00:05:49.979 06:43:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:49.979 06:43:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:05:55.232 06:44:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1141098 00:05:55.232 06:44:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 1141098 ']' 00:05:55.232 06:44:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 1141098 00:05:55.232 06:44:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:55.232 06:44:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:55.232 06:44:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1141098 00:05:55.232 06:44:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:55.232 06:44:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 
00:05:55.232 06:44:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1141098' 00:05:55.232 killing process with pid 1141098 00:05:55.232 06:44:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 1141098 00:05:55.232 06:44:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 1141098 00:05:55.232 06:44:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:05:55.232 06:44:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:05:55.232 00:05:55.232 real 0m6.280s 00:05:55.232 user 0m5.975s 00:05:55.232 sys 0m0.639s 00:05:55.232 06:44:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:55.232 06:44:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:55.232 ************************************ 00:05:55.232 END TEST skip_rpc_with_json 00:05:55.232 ************************************ 00:05:55.491 06:44:02 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:55.491 06:44:02 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:55.491 06:44:02 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:55.491 06:44:02 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.491 ************************************ 00:05:55.491 START TEST skip_rpc_with_delay 00:05:55.491 ************************************ 00:05:55.491 06:44:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:55.491 06:44:02 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:55.491 06:44:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:05:55.491 06:44:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:55.491 06:44:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:55.491 06:44:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:55.491 06:44:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:55.491 06:44:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:55.491 06:44:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:55.491 06:44:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:55.491 06:44:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:55.491 06:44:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:55.491 06:44:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:55.491 [2024-12-12 06:44:02.860147] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:05:55.491 06:44:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:55.491 06:44:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:55.491 06:44:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:55.491 06:44:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:55.491 00:05:55.491 real 0m0.046s 00:05:55.491 user 0m0.017s 00:05:55.491 sys 0m0.029s 00:05:55.491 06:44:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:55.491 06:44:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:55.491 ************************************ 00:05:55.491 END TEST skip_rpc_with_delay 00:05:55.491 ************************************ 00:05:55.491 06:44:02 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:55.491 06:44:02 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:55.491 06:44:02 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:55.491 06:44:02 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:55.491 06:44:02 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:55.491 06:44:02 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.491 ************************************ 00:05:55.491 START TEST exit_on_failed_rpc_init 00:05:55.491 ************************************ 00:05:55.491 06:44:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:55.491 06:44:02 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1142207 00:05:55.491 06:44:02 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1142207 00:05:55.491 06:44:02 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:55.491 06:44:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 1142207 ']' 00:05:55.491 06:44:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.491 06:44:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:55.491 06:44:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:55.491 06:44:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:55.491 06:44:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:55.491 [2024-12-12 06:44:02.993257] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 
00:05:55.491 [2024-12-12 06:44:02.993324] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1142207 ] 00:05:55.748 [2024-12-12 06:44:03.063977] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.748 [2024-12-12 06:44:03.105143] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.005 06:44:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:56.006 06:44:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:56.006 06:44:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:56.006 06:44:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:56.006 06:44:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:56.006 06:44:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:56.006 06:44:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:56.006 06:44:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:56.006 06:44:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:56.006 06:44:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:56.006 06:44:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:56.006 06:44:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:56.006 06:44:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:56.006 06:44:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:56.006 06:44:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:56.006 [2024-12-12 06:44:03.353475] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 00:05:56.006 [2024-12-12 06:44:03.353560] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1142219 ] 00:05:56.006 [2024-12-12 06:44:03.425345] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.006 [2024-12-12 06:44:03.465800] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:56.006 [2024-12-12 06:44:03.465880] rpc.c: 181:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
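(Editor's note, not part of the captured output: the "socket path in use" error just above is the point of exit_on_failed_rpc_init, which starts a second spdk_tgt while the first still owns the default RPC socket. A sketch of the pattern, with illustrative socket paths; only the -m and -r flags shown in this run are used.)
./build/bin/spdk_tgt -m 0x1 &                        # first instance serves RPC on the default /var/tmp/spdk.sock
./build/bin/spdk_tgt -m 0x2                          # second instance aborts: RPC Unix domain socket path already in use
./build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock   # hypothetical: a separate -r socket would avoid the clash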
00:05:56.006 [2024-12-12 06:44:03.465893] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:56.006 [2024-12-12 06:44:03.465901] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:56.006 06:44:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:56.006 06:44:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:56.006 06:44:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:56.006 06:44:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:56.006 06:44:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:56.006 06:44:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:56.006 06:44:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:56.006 06:44:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1142207 00:05:56.006 06:44:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 1142207 ']' 00:05:56.006 06:44:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 1142207 00:05:56.006 06:44:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:56.006 06:44:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:56.006 06:44:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1142207 00:05:56.263 06:44:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:56.263 06:44:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:56.263 06:44:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1142207' 00:05:56.263 killing process with pid 1142207 00:05:56.263 06:44:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 1142207 00:05:56.263 06:44:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 1142207 00:05:56.520 00:05:56.520 real 0m0.896s 00:05:56.520 user 0m0.909s 00:05:56.520 sys 0m0.406s 00:05:56.520 06:44:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:56.520 06:44:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:56.520 ************************************ 00:05:56.520 END TEST exit_on_failed_rpc_init 00:05:56.520 ************************************ 00:05:56.520 06:44:03 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:05:56.520 00:05:56.520 real 0m13.135s 00:05:56.520 user 0m12.273s 00:05:56.520 sys 0m1.713s 00:05:56.520 06:44:03 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:56.520 06:44:03 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.520 ************************************ 00:05:56.520 END TEST skip_rpc 00:05:56.520 ************************************ 00:05:56.520 06:44:03 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:56.520 06:44:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:56.520 06:44:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:56.520 06:44:03 
-- common/autotest_common.sh@10 -- # set +x 00:05:56.520 ************************************ 00:05:56.520 START TEST rpc_client 00:05:56.520 ************************************ 00:05:56.520 06:44:03 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:56.777 * Looking for test storage... 00:05:56.778 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client 00:05:56.778 06:44:04 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:56.778 06:44:04 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:05:56.778 06:44:04 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:56.778 06:44:04 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:56.778 06:44:04 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:56.778 06:44:04 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:56.778 06:44:04 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:56.778 06:44:04 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:56.778 06:44:04 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:56.778 06:44:04 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:56.778 06:44:04 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:56.778 06:44:04 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:56.778 06:44:04 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:56.778 06:44:04 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:56.778 06:44:04 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:56.778 06:44:04 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:56.778 06:44:04 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:56.778 06:44:04 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:56.778 06:44:04 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:56.778 06:44:04 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:56.778 06:44:04 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:56.778 06:44:04 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:56.778 06:44:04 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:56.778 06:44:04 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:56.778 06:44:04 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:56.778 06:44:04 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:56.778 06:44:04 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:56.778 06:44:04 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:56.778 06:44:04 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:56.778 06:44:04 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:56.778 06:44:04 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:56.778 06:44:04 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:56.778 06:44:04 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:56.778 06:44:04 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:56.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.778 --rc genhtml_branch_coverage=1 00:05:56.778 --rc genhtml_function_coverage=1 00:05:56.778 --rc genhtml_legend=1 00:05:56.778 --rc geninfo_all_blocks=1 00:05:56.778 --rc geninfo_unexecuted_blocks=1 00:05:56.778 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:56.778 ' 00:05:56.778 06:44:04 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:56.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.778 --rc genhtml_branch_coverage=1 00:05:56.778 --rc genhtml_function_coverage=1 00:05:56.778 --rc genhtml_legend=1 00:05:56.778 --rc geninfo_all_blocks=1 00:05:56.778 --rc geninfo_unexecuted_blocks=1 00:05:56.778 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:56.778 ' 00:05:56.778 06:44:04 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:56.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.778 --rc genhtml_branch_coverage=1 00:05:56.778 --rc genhtml_function_coverage=1 00:05:56.778 --rc genhtml_legend=1 00:05:56.778 --rc geninfo_all_blocks=1 00:05:56.778 --rc geninfo_unexecuted_blocks=1 00:05:56.778 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:56.778 ' 00:05:56.778 06:44:04 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:56.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.778 --rc genhtml_branch_coverage=1 00:05:56.778 --rc genhtml_function_coverage=1 00:05:56.778 --rc genhtml_legend=1 00:05:56.778 --rc geninfo_all_blocks=1 00:05:56.778 --rc geninfo_unexecuted_blocks=1 00:05:56.778 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:56.778 ' 00:05:56.778 06:44:04 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:56.778 OK 00:05:56.778 06:44:04 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:56.778 00:05:56.778 real 0m0.218s 00:05:56.778 user 0m0.129s 00:05:56.778 sys 0m0.104s 00:05:56.778 06:44:04 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 
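(Editor's note, not part of the captured output: the trace above is scripts/common.sh comparing the installed lcov version against 2 to pick coverage options. A condensed sketch of that check, with variable names as in the traced script.)
IFS='.-:' read -ra ver1 <<< "1.15"   # lcov version reported by the toolchain
IFS='.-:' read -ra ver2 <<< "2"
(( ver1[0] < ver2[0] ))              # 1 < 2, so "lt 1.15 2" succeeds and the llvm-gcov LCOV options are exported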
00:05:56.778 06:44:04 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:56.778 ************************************ 00:05:56.778 END TEST rpc_client 00:05:56.778 ************************************ 00:05:56.778 06:44:04 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:05:56.778 06:44:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:56.778 06:44:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:56.778 06:44:04 -- common/autotest_common.sh@10 -- # set +x 00:05:56.778 ************************************ 00:05:56.778 START TEST json_config 00:05:56.778 ************************************ 00:05:56.778 06:44:04 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:05:57.037 06:44:04 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:57.037 06:44:04 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:05:57.037 06:44:04 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:57.037 06:44:04 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:57.037 06:44:04 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:57.037 06:44:04 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:57.037 06:44:04 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:57.037 06:44:04 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:57.037 06:44:04 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:57.037 06:44:04 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:57.037 06:44:04 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:57.037 06:44:04 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:57.037 06:44:04 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:57.037 06:44:04 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:57.037 06:44:04 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:57.037 06:44:04 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:57.037 06:44:04 json_config -- scripts/common.sh@345 -- # : 1 00:05:57.037 06:44:04 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:57.037 06:44:04 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:57.037 06:44:04 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:57.037 06:44:04 json_config -- scripts/common.sh@353 -- # local d=1 00:05:57.037 06:44:04 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:57.037 06:44:04 json_config -- scripts/common.sh@355 -- # echo 1 00:05:57.037 06:44:04 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:57.037 06:44:04 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:57.037 06:44:04 json_config -- scripts/common.sh@353 -- # local d=2 00:05:57.037 06:44:04 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:57.037 06:44:04 json_config -- scripts/common.sh@355 -- # echo 2 00:05:57.037 06:44:04 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:57.037 06:44:04 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:57.037 06:44:04 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:57.037 06:44:04 json_config -- scripts/common.sh@368 -- # return 0 00:05:57.037 06:44:04 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:57.037 06:44:04 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:57.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.037 --rc genhtml_branch_coverage=1 00:05:57.037 --rc genhtml_function_coverage=1 00:05:57.037 --rc genhtml_legend=1 00:05:57.037 --rc geninfo_all_blocks=1 00:05:57.037 --rc geninfo_unexecuted_blocks=1 00:05:57.037 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:57.037 ' 00:05:57.037 06:44:04 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:57.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.037 --rc genhtml_branch_coverage=1 00:05:57.037 --rc genhtml_function_coverage=1 00:05:57.037 --rc genhtml_legend=1 00:05:57.037 --rc geninfo_all_blocks=1 00:05:57.037 --rc geninfo_unexecuted_blocks=1 00:05:57.037 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:57.037 ' 00:05:57.037 06:44:04 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:57.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.037 --rc genhtml_branch_coverage=1 00:05:57.037 --rc genhtml_function_coverage=1 00:05:57.037 --rc genhtml_legend=1 00:05:57.037 --rc geninfo_all_blocks=1 00:05:57.037 --rc geninfo_unexecuted_blocks=1 00:05:57.037 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:57.037 ' 00:05:57.037 06:44:04 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:57.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.037 --rc genhtml_branch_coverage=1 00:05:57.037 --rc genhtml_function_coverage=1 00:05:57.037 --rc genhtml_legend=1 00:05:57.037 --rc geninfo_all_blocks=1 00:05:57.037 --rc geninfo_unexecuted_blocks=1 00:05:57.037 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:57.037 ' 00:05:57.037 06:44:04 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:05:57.037 06:44:04 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:57.037 06:44:04 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:57.037 06:44:04 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:57.037 06:44:04 json_config -- nvmf/common.sh@10 
-- # NVMF_SECOND_PORT=4421 00:05:57.037 06:44:04 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:57.037 06:44:04 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:57.037 06:44:04 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:57.037 06:44:04 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:57.037 06:44:04 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:57.037 06:44:04 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:57.037 06:44:04 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:57.037 06:44:04 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:05:57.037 06:44:04 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:05:57.037 06:44:04 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:57.037 06:44:04 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:57.037 06:44:04 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:57.037 06:44:04 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:57.037 06:44:04 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:05:57.037 06:44:04 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:57.037 06:44:04 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:57.037 06:44:04 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:57.037 06:44:04 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:57.037 06:44:04 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.037 06:44:04 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.037 06:44:04 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.037 06:44:04 json_config -- paths/export.sh@5 -- # export PATH 00:05:57.037 06:44:04 json_config -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.037 06:44:04 json_config -- nvmf/common.sh@51 -- # : 0 00:05:57.037 06:44:04 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:57.037 06:44:04 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:57.037 06:44:04 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:57.037 06:44:04 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:57.037 06:44:04 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:57.037 06:44:04 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:57.038 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:57.038 06:44:04 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:57.038 06:44:04 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:57.038 06:44:04 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:57.038 06:44:04 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/common.sh 00:05:57.038 06:44:04 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:57.038 06:44:04 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:57.038 06:44:04 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:57.038 06:44:04 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:57.038 06:44:04 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:57.038 WARNING: No tests are enabled so not running JSON configuration tests 00:05:57.038 06:44:04 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:57.038 00:05:57.038 real 0m0.189s 00:05:57.038 user 0m0.117s 00:05:57.038 sys 0m0.081s 00:05:57.038 06:44:04 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:57.038 06:44:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:57.038 ************************************ 00:05:57.038 END TEST json_config 00:05:57.038 ************************************ 00:05:57.038 06:44:04 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:57.038 06:44:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:57.038 06:44:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:57.038 06:44:04 -- common/autotest_common.sh@10 -- # set +x 00:05:57.038 ************************************ 00:05:57.038 START TEST json_config_extra_key 00:05:57.038 ************************************ 00:05:57.038 06:44:04 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:57.296 06:44:04 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:57.296 06:44:04 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov 
--version 00:05:57.296 06:44:04 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:57.296 06:44:04 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:57.296 06:44:04 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:57.296 06:44:04 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:57.296 06:44:04 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:57.296 06:44:04 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:57.296 06:44:04 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:57.296 06:44:04 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:57.296 06:44:04 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:57.296 06:44:04 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:57.296 06:44:04 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:57.296 06:44:04 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:57.296 06:44:04 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:57.296 06:44:04 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:57.296 06:44:04 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:57.296 06:44:04 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:57.296 06:44:04 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:57.296 06:44:04 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:57.296 06:44:04 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:57.296 06:44:04 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:57.296 06:44:04 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:57.296 06:44:04 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:57.296 06:44:04 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:57.296 06:44:04 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:57.296 06:44:04 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:57.296 06:44:04 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:57.296 06:44:04 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:57.296 06:44:04 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:57.296 06:44:04 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:57.296 06:44:04 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:57.296 06:44:04 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:57.296 06:44:04 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:57.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.296 --rc genhtml_branch_coverage=1 00:05:57.296 --rc genhtml_function_coverage=1 00:05:57.296 --rc genhtml_legend=1 00:05:57.296 --rc geninfo_all_blocks=1 00:05:57.296 --rc geninfo_unexecuted_blocks=1 00:05:57.296 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:57.296 ' 00:05:57.296 06:44:04 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:57.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.296 --rc genhtml_branch_coverage=1 
00:05:57.296 --rc genhtml_function_coverage=1 00:05:57.296 --rc genhtml_legend=1 00:05:57.296 --rc geninfo_all_blocks=1 00:05:57.296 --rc geninfo_unexecuted_blocks=1 00:05:57.296 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:57.296 ' 00:05:57.296 06:44:04 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:57.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.296 --rc genhtml_branch_coverage=1 00:05:57.297 --rc genhtml_function_coverage=1 00:05:57.297 --rc genhtml_legend=1 00:05:57.297 --rc geninfo_all_blocks=1 00:05:57.297 --rc geninfo_unexecuted_blocks=1 00:05:57.297 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:57.297 ' 00:05:57.297 06:44:04 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:57.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.297 --rc genhtml_branch_coverage=1 00:05:57.297 --rc genhtml_function_coverage=1 00:05:57.297 --rc genhtml_legend=1 00:05:57.297 --rc geninfo_all_blocks=1 00:05:57.297 --rc geninfo_unexecuted_blocks=1 00:05:57.297 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:57.297 ' 00:05:57.297 06:44:04 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:05:57.297 06:44:04 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:57.297 06:44:04 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:57.297 06:44:04 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:57.297 06:44:04 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:57.297 06:44:04 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:57.297 06:44:04 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:57.297 06:44:04 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:57.297 06:44:04 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:57.297 06:44:04 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:57.297 06:44:04 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:57.297 06:44:04 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:57.297 06:44:04 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:05:57.297 06:44:04 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:05:57.297 06:44:04 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:57.297 06:44:04 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:57.297 06:44:04 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:57.297 06:44:04 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:57.297 06:44:04 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:05:57.297 06:44:04 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:57.297 06:44:04 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:57.297 06:44:04 json_config_extra_key -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:57.297 06:44:04 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:57.297 06:44:04 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.297 06:44:04 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.297 06:44:04 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.297 06:44:04 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:57.297 06:44:04 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.297 06:44:04 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:57.297 06:44:04 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:57.297 06:44:04 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:57.297 06:44:04 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:57.297 06:44:04 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:57.297 06:44:04 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:57.297 06:44:04 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:57.297 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:57.297 06:44:04 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:57.297 06:44:04 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:57.297 06:44:04 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:57.297 06:44:04 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/common.sh 00:05:57.297 06:44:04 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:57.297 06:44:04 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # 
declare -A app_pid 00:05:57.297 06:44:04 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:57.297 06:44:04 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:57.297 06:44:04 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:57.297 06:44:04 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:57.297 06:44:04 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:57.297 06:44:04 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:57.297 06:44:04 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:57.297 06:44:04 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:57.297 INFO: launching applications... 00:05:57.297 06:44:04 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:05:57.297 06:44:04 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:57.297 06:44:04 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:57.297 06:44:04 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:57.297 06:44:04 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:57.297 06:44:04 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:57.297 06:44:04 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:57.297 06:44:04 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:57.297 06:44:04 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1142656 00:05:57.297 06:44:04 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:57.297 Waiting for target to run... 00:05:57.297 06:44:04 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1142656 /var/tmp/spdk_tgt.sock 00:05:57.297 06:44:04 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:05:57.297 06:44:04 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 1142656 ']' 00:05:57.297 06:44:04 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:57.297 06:44:04 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:57.297 06:44:04 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:57.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
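(Editor's note, not part of the captured output: json_config_extra_key launches spdk_tgt with the extra_key.json config on /var/tmp/spdk_tgt.sock, then shuts it down. A condensed sketch of the stop/poll pattern traced in the entries that follow; the pid variable is illustrative.)
kill -SIGINT "$app_pid"                      # json_config/common.sh asks the target to exit cleanly
for (( i = 0; i < 30; i++ )); do
    kill -0 "$app_pid" 2>/dev/null || break  # poll until the process is gone
    sleep 0.5
done
echo 'SPDK target shutdown done'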
00:05:57.297 06:44:04 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:57.297 06:44:04 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:57.297 [2024-12-12 06:44:04.767035] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 00:05:57.297 [2024-12-12 06:44:04.767101] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1142656 ] 00:05:57.862 [2024-12-12 06:44:05.213106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.862 [2024-12-12 06:44:05.267527] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.119 06:44:05 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:58.119 06:44:05 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:58.119 06:44:05 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:58.119 00:05:58.119 06:44:05 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:58.119 INFO: shutting down applications... 00:05:58.119 06:44:05 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:58.119 06:44:05 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:58.119 06:44:05 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:58.119 06:44:05 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1142656 ]] 00:05:58.119 06:44:05 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1142656 00:05:58.119 06:44:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:58.119 06:44:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:58.119 06:44:05 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1142656 00:05:58.119 06:44:05 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:58.682 06:44:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:58.682 06:44:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:58.682 06:44:06 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1142656 00:05:58.682 06:44:06 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:58.682 06:44:06 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:58.682 06:44:06 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:58.682 06:44:06 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:58.682 SPDK target shutdown done 00:05:58.682 06:44:06 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:58.682 Success 00:05:58.682 00:05:58.682 real 0m1.587s 00:05:58.682 user 0m1.171s 00:05:58.682 sys 0m0.584s 00:05:58.682 06:44:06 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:58.682 06:44:06 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:58.682 ************************************ 00:05:58.682 END TEST json_config_extra_key 00:05:58.682 ************************************ 00:05:58.682 06:44:06 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 
00:05:58.682 06:44:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:58.682 06:44:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:58.682 06:44:06 -- common/autotest_common.sh@10 -- # set +x 00:05:58.939 ************************************ 00:05:58.939 START TEST alias_rpc 00:05:58.939 ************************************ 00:05:58.939 06:44:06 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:58.939 * Looking for test storage... 00:05:58.939 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc 00:05:58.939 06:44:06 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:58.939 06:44:06 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:58.939 06:44:06 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:58.939 06:44:06 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:58.939 06:44:06 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:58.939 06:44:06 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:58.939 06:44:06 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:58.939 06:44:06 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:58.939 06:44:06 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:58.939 06:44:06 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:58.939 06:44:06 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:58.939 06:44:06 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:58.939 06:44:06 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:58.939 06:44:06 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:58.939 06:44:06 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:58.939 06:44:06 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:58.939 06:44:06 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:58.939 06:44:06 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:58.939 06:44:06 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:58.939 06:44:06 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:58.939 06:44:06 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:58.939 06:44:06 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:58.939 06:44:06 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:58.939 06:44:06 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:58.939 06:44:06 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:58.939 06:44:06 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:58.939 06:44:06 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:58.939 06:44:06 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:58.939 06:44:06 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:58.939 06:44:06 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:58.939 06:44:06 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:58.939 06:44:06 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:58.939 06:44:06 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:58.939 06:44:06 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:58.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.939 --rc genhtml_branch_coverage=1 00:05:58.939 --rc genhtml_function_coverage=1 00:05:58.939 --rc genhtml_legend=1 00:05:58.939 --rc geninfo_all_blocks=1 00:05:58.939 --rc geninfo_unexecuted_blocks=1 00:05:58.939 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:58.939 ' 00:05:58.939 06:44:06 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:58.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.939 --rc genhtml_branch_coverage=1 00:05:58.939 --rc genhtml_function_coverage=1 00:05:58.939 --rc genhtml_legend=1 00:05:58.939 --rc geninfo_all_blocks=1 00:05:58.939 --rc geninfo_unexecuted_blocks=1 00:05:58.939 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:58.939 ' 00:05:58.939 06:44:06 alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:58.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.939 --rc genhtml_branch_coverage=1 00:05:58.939 --rc genhtml_function_coverage=1 00:05:58.939 --rc genhtml_legend=1 00:05:58.939 --rc geninfo_all_blocks=1 00:05:58.939 --rc geninfo_unexecuted_blocks=1 00:05:58.939 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:58.939 ' 00:05:58.939 06:44:06 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:58.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.939 --rc genhtml_branch_coverage=1 00:05:58.939 --rc genhtml_function_coverage=1 00:05:58.939 --rc genhtml_legend=1 00:05:58.939 --rc geninfo_all_blocks=1 00:05:58.939 --rc geninfo_unexecuted_blocks=1 00:05:58.939 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:58.939 ' 00:05:58.939 06:44:06 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:58.939 06:44:06 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1142976 00:05:58.939 06:44:06 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1142976 00:05:58.939 06:44:06 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:58.939 06:44:06 alias_rpc -- 
common/autotest_common.sh@835 -- # '[' -z 1142976 ']' 00:05:58.939 06:44:06 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.939 06:44:06 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:58.939 06:44:06 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:58.939 06:44:06 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:58.939 06:44:06 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.939 [2024-12-12 06:44:06.427143] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 00:05:58.939 [2024-12-12 06:44:06.427229] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1142976 ] 00:05:59.197 [2024-12-12 06:44:06.497914] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.197 [2024-12-12 06:44:06.537594] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.453 06:44:06 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:59.453 06:44:06 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:59.453 06:44:06 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:59.453 06:44:06 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1142976 00:05:59.453 06:44:06 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 1142976 ']' 00:05:59.453 06:44:06 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 1142976 00:05:59.453 06:44:06 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:59.453 06:44:06 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:59.453 06:44:06 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1142976 00:05:59.710 06:44:07 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:59.710 06:44:07 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:59.710 06:44:07 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1142976' 00:05:59.710 killing process with pid 1142976 00:05:59.710 06:44:07 alias_rpc -- common/autotest_common.sh@973 -- # kill 1142976 00:05:59.710 06:44:07 alias_rpc -- common/autotest_common.sh@978 -- # wait 1142976 00:05:59.968 00:05:59.968 real 0m1.111s 00:05:59.968 user 0m1.117s 00:05:59.968 sys 0m0.434s 00:05:59.968 06:44:07 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:59.968 06:44:07 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.968 ************************************ 00:05:59.968 END TEST alias_rpc 00:05:59.968 ************************************ 00:05:59.968 06:44:07 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:59.968 06:44:07 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:59.968 06:44:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:59.968 06:44:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:59.968 06:44:07 -- common/autotest_common.sh@10 -- # set +x 00:05:59.968 ************************************ 00:05:59.968 START TEST 
spdkcli_tcp 00:05:59.968 ************************************ 00:05:59.968 06:44:07 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:00.225 * Looking for test storage... 00:06:00.225 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli 00:06:00.225 06:44:07 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:00.225 06:44:07 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:06:00.225 06:44:07 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:00.225 06:44:07 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:00.226 06:44:07 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:00.226 06:44:07 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:00.226 06:44:07 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:00.226 06:44:07 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:00.226 06:44:07 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:00.226 06:44:07 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:00.226 06:44:07 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:00.226 06:44:07 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:00.226 06:44:07 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:00.226 06:44:07 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:00.226 06:44:07 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:00.226 06:44:07 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:00.226 06:44:07 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:00.226 06:44:07 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:00.226 06:44:07 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:00.226 06:44:07 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:00.226 06:44:07 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:00.226 06:44:07 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:00.226 06:44:07 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:00.226 06:44:07 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:00.226 06:44:07 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:00.226 06:44:07 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:00.226 06:44:07 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:00.226 06:44:07 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:00.226 06:44:07 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:00.226 06:44:07 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:00.226 06:44:07 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:00.226 06:44:07 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:00.226 06:44:07 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:00.226 06:44:07 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:00.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.226 --rc genhtml_branch_coverage=1 00:06:00.226 --rc genhtml_function_coverage=1 00:06:00.226 --rc genhtml_legend=1 00:06:00.226 --rc geninfo_all_blocks=1 00:06:00.226 --rc geninfo_unexecuted_blocks=1 00:06:00.226 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:00.226 ' 00:06:00.226 06:44:07 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:00.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.226 --rc genhtml_branch_coverage=1 00:06:00.226 --rc genhtml_function_coverage=1 00:06:00.226 --rc genhtml_legend=1 00:06:00.226 --rc geninfo_all_blocks=1 00:06:00.226 --rc geninfo_unexecuted_blocks=1 00:06:00.226 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:00.226 ' 00:06:00.226 06:44:07 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:00.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.226 --rc genhtml_branch_coverage=1 00:06:00.226 --rc genhtml_function_coverage=1 00:06:00.226 --rc genhtml_legend=1 00:06:00.226 --rc geninfo_all_blocks=1 00:06:00.226 --rc geninfo_unexecuted_blocks=1 00:06:00.226 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:00.226 ' 00:06:00.226 06:44:07 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:00.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.226 --rc genhtml_branch_coverage=1 00:06:00.226 --rc genhtml_function_coverage=1 00:06:00.226 --rc genhtml_legend=1 00:06:00.226 --rc geninfo_all_blocks=1 00:06:00.226 --rc geninfo_unexecuted_blocks=1 00:06:00.226 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:00.226 ' 00:06:00.226 06:44:07 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/common.sh 00:06:00.226 06:44:07 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:00.226 06:44:07 spdkcli_tcp -- spdkcli/common.sh@7 -- # 
spdk_clear_config_py=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/clear_config.py 00:06:00.226 06:44:07 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:00.226 06:44:07 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:00.226 06:44:07 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:00.226 06:44:07 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:00.226 06:44:07 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:00.226 06:44:07 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:00.226 06:44:07 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1143303 00:06:00.226 06:44:07 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1143303 00:06:00.226 06:44:07 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 1143303 ']' 00:06:00.226 06:44:07 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.226 06:44:07 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:00.226 06:44:07 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.226 06:44:07 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:00.226 06:44:07 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:00.226 06:44:07 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:00.226 [2024-12-12 06:44:07.626171] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 00:06:00.226 [2024-12-12 06:44:07.626233] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1143303 ] 00:06:00.226 [2024-12-12 06:44:07.696372] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:00.226 [2024-12-12 06:44:07.740446] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:00.226 [2024-12-12 06:44:07.740449] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.483 06:44:07 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:00.483 06:44:07 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:06:00.483 06:44:07 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1143312 00:06:00.483 06:44:07 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:00.483 06:44:07 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:00.740 [ 00:06:00.740 "spdk_get_version", 00:06:00.740 "rpc_get_methods", 00:06:00.740 "notify_get_notifications", 00:06:00.740 "notify_get_types", 00:06:00.740 "trace_get_info", 00:06:00.740 "trace_get_tpoint_group_mask", 00:06:00.740 "trace_disable_tpoint_group", 00:06:00.740 "trace_enable_tpoint_group", 00:06:00.740 "trace_clear_tpoint_mask", 00:06:00.740 "trace_set_tpoint_mask", 00:06:00.740 "fsdev_set_opts", 00:06:00.740 "fsdev_get_opts", 00:06:00.740 "framework_get_pci_devices", 00:06:00.740 "framework_get_config", 00:06:00.740 "framework_get_subsystems", 00:06:00.740 "vfu_tgt_set_base_path", 00:06:00.740 
"keyring_get_keys", 00:06:00.740 "iobuf_get_stats", 00:06:00.740 "iobuf_set_options", 00:06:00.740 "sock_get_default_impl", 00:06:00.740 "sock_set_default_impl", 00:06:00.740 "sock_impl_set_options", 00:06:00.740 "sock_impl_get_options", 00:06:00.740 "vmd_rescan", 00:06:00.740 "vmd_remove_device", 00:06:00.740 "vmd_enable", 00:06:00.740 "accel_get_stats", 00:06:00.740 "accel_set_options", 00:06:00.740 "accel_set_driver", 00:06:00.740 "accel_crypto_key_destroy", 00:06:00.740 "accel_crypto_keys_get", 00:06:00.740 "accel_crypto_key_create", 00:06:00.740 "accel_assign_opc", 00:06:00.740 "accel_get_module_info", 00:06:00.740 "accel_get_opc_assignments", 00:06:00.740 "bdev_get_histogram", 00:06:00.740 "bdev_enable_histogram", 00:06:00.740 "bdev_set_qos_limit", 00:06:00.740 "bdev_set_qd_sampling_period", 00:06:00.740 "bdev_get_bdevs", 00:06:00.740 "bdev_reset_iostat", 00:06:00.740 "bdev_get_iostat", 00:06:00.740 "bdev_examine", 00:06:00.740 "bdev_wait_for_examine", 00:06:00.740 "bdev_set_options", 00:06:00.740 "scsi_get_devices", 00:06:00.740 "thread_set_cpumask", 00:06:00.740 "scheduler_set_options", 00:06:00.740 "framework_get_governor", 00:06:00.740 "framework_get_scheduler", 00:06:00.740 "framework_set_scheduler", 00:06:00.740 "framework_get_reactors", 00:06:00.740 "thread_get_io_channels", 00:06:00.740 "thread_get_pollers", 00:06:00.740 "thread_get_stats", 00:06:00.740 "framework_monitor_context_switch", 00:06:00.740 "spdk_kill_instance", 00:06:00.740 "log_enable_timestamps", 00:06:00.740 "log_get_flags", 00:06:00.740 "log_clear_flag", 00:06:00.740 "log_set_flag", 00:06:00.740 "log_get_level", 00:06:00.740 "log_set_level", 00:06:00.740 "log_get_print_level", 00:06:00.740 "log_set_print_level", 00:06:00.740 "framework_enable_cpumask_locks", 00:06:00.740 "framework_disable_cpumask_locks", 00:06:00.740 "framework_wait_init", 00:06:00.740 "framework_start_init", 00:06:00.740 "virtio_blk_create_transport", 00:06:00.740 "virtio_blk_get_transports", 00:06:00.740 "vhost_controller_set_coalescing", 00:06:00.740 "vhost_get_controllers", 00:06:00.740 "vhost_delete_controller", 00:06:00.740 "vhost_create_blk_controller", 00:06:00.740 "vhost_scsi_controller_remove_target", 00:06:00.740 "vhost_scsi_controller_add_target", 00:06:00.740 "vhost_start_scsi_controller", 00:06:00.740 "vhost_create_scsi_controller", 00:06:00.740 "ublk_recover_disk", 00:06:00.740 "ublk_get_disks", 00:06:00.740 "ublk_stop_disk", 00:06:00.740 "ublk_start_disk", 00:06:00.740 "ublk_destroy_target", 00:06:00.740 "ublk_create_target", 00:06:00.740 "nbd_get_disks", 00:06:00.740 "nbd_stop_disk", 00:06:00.740 "nbd_start_disk", 00:06:00.740 "env_dpdk_get_mem_stats", 00:06:00.740 "nvmf_stop_mdns_prr", 00:06:00.740 "nvmf_publish_mdns_prr", 00:06:00.740 "nvmf_subsystem_get_listeners", 00:06:00.740 "nvmf_subsystem_get_qpairs", 00:06:00.740 "nvmf_subsystem_get_controllers", 00:06:00.741 "nvmf_get_stats", 00:06:00.741 "nvmf_get_transports", 00:06:00.741 "nvmf_create_transport", 00:06:00.741 "nvmf_get_targets", 00:06:00.741 "nvmf_delete_target", 00:06:00.741 "nvmf_create_target", 00:06:00.741 "nvmf_subsystem_allow_any_host", 00:06:00.741 "nvmf_subsystem_set_keys", 00:06:00.741 "nvmf_subsystem_remove_host", 00:06:00.741 "nvmf_subsystem_add_host", 00:06:00.741 "nvmf_ns_remove_host", 00:06:00.741 "nvmf_ns_add_host", 00:06:00.741 "nvmf_subsystem_remove_ns", 00:06:00.741 "nvmf_subsystem_set_ns_ana_group", 00:06:00.741 "nvmf_subsystem_add_ns", 00:06:00.741 "nvmf_subsystem_listener_set_ana_state", 00:06:00.741 "nvmf_discovery_get_referrals", 
00:06:00.741 "nvmf_discovery_remove_referral", 00:06:00.741 "nvmf_discovery_add_referral", 00:06:00.741 "nvmf_subsystem_remove_listener", 00:06:00.741 "nvmf_subsystem_add_listener", 00:06:00.741 "nvmf_delete_subsystem", 00:06:00.741 "nvmf_create_subsystem", 00:06:00.741 "nvmf_get_subsystems", 00:06:00.741 "nvmf_set_crdt", 00:06:00.741 "nvmf_set_config", 00:06:00.741 "nvmf_set_max_subsystems", 00:06:00.741 "iscsi_get_histogram", 00:06:00.741 "iscsi_enable_histogram", 00:06:00.741 "iscsi_set_options", 00:06:00.741 "iscsi_get_auth_groups", 00:06:00.741 "iscsi_auth_group_remove_secret", 00:06:00.741 "iscsi_auth_group_add_secret", 00:06:00.741 "iscsi_delete_auth_group", 00:06:00.741 "iscsi_create_auth_group", 00:06:00.741 "iscsi_set_discovery_auth", 00:06:00.741 "iscsi_get_options", 00:06:00.741 "iscsi_target_node_request_logout", 00:06:00.741 "iscsi_target_node_set_redirect", 00:06:00.741 "iscsi_target_node_set_auth", 00:06:00.741 "iscsi_target_node_add_lun", 00:06:00.741 "iscsi_get_stats", 00:06:00.741 "iscsi_get_connections", 00:06:00.741 "iscsi_portal_group_set_auth", 00:06:00.741 "iscsi_start_portal_group", 00:06:00.741 "iscsi_delete_portal_group", 00:06:00.741 "iscsi_create_portal_group", 00:06:00.741 "iscsi_get_portal_groups", 00:06:00.741 "iscsi_delete_target_node", 00:06:00.741 "iscsi_target_node_remove_pg_ig_maps", 00:06:00.741 "iscsi_target_node_add_pg_ig_maps", 00:06:00.741 "iscsi_create_target_node", 00:06:00.741 "iscsi_get_target_nodes", 00:06:00.741 "iscsi_delete_initiator_group", 00:06:00.741 "iscsi_initiator_group_remove_initiators", 00:06:00.741 "iscsi_initiator_group_add_initiators", 00:06:00.741 "iscsi_create_initiator_group", 00:06:00.741 "iscsi_get_initiator_groups", 00:06:00.741 "fsdev_aio_delete", 00:06:00.741 "fsdev_aio_create", 00:06:00.741 "keyring_linux_set_options", 00:06:00.741 "keyring_file_remove_key", 00:06:00.741 "keyring_file_add_key", 00:06:00.741 "vfu_virtio_create_fs_endpoint", 00:06:00.741 "vfu_virtio_create_scsi_endpoint", 00:06:00.741 "vfu_virtio_scsi_remove_target", 00:06:00.741 "vfu_virtio_scsi_add_target", 00:06:00.741 "vfu_virtio_create_blk_endpoint", 00:06:00.741 "vfu_virtio_delete_endpoint", 00:06:00.741 "iaa_scan_accel_module", 00:06:00.741 "dsa_scan_accel_module", 00:06:00.741 "ioat_scan_accel_module", 00:06:00.741 "accel_error_inject_error", 00:06:00.741 "bdev_iscsi_delete", 00:06:00.741 "bdev_iscsi_create", 00:06:00.741 "bdev_iscsi_set_options", 00:06:00.741 "bdev_virtio_attach_controller", 00:06:00.741 "bdev_virtio_scsi_get_devices", 00:06:00.741 "bdev_virtio_detach_controller", 00:06:00.741 "bdev_virtio_blk_set_hotplug", 00:06:00.741 "bdev_ftl_set_property", 00:06:00.741 "bdev_ftl_get_properties", 00:06:00.741 "bdev_ftl_get_stats", 00:06:00.741 "bdev_ftl_unmap", 00:06:00.741 "bdev_ftl_unload", 00:06:00.741 "bdev_ftl_delete", 00:06:00.741 "bdev_ftl_load", 00:06:00.741 "bdev_ftl_create", 00:06:00.741 "bdev_aio_delete", 00:06:00.741 "bdev_aio_rescan", 00:06:00.741 "bdev_aio_create", 00:06:00.741 "blobfs_create", 00:06:00.741 "blobfs_detect", 00:06:00.741 "blobfs_set_cache_size", 00:06:00.741 "bdev_zone_block_delete", 00:06:00.741 "bdev_zone_block_create", 00:06:00.741 "bdev_delay_delete", 00:06:00.741 "bdev_delay_create", 00:06:00.741 "bdev_delay_update_latency", 00:06:00.741 "bdev_split_delete", 00:06:00.741 "bdev_split_create", 00:06:00.741 "bdev_error_inject_error", 00:06:00.741 "bdev_error_delete", 00:06:00.741 "bdev_error_create", 00:06:00.741 "bdev_raid_set_options", 00:06:00.741 "bdev_raid_remove_base_bdev", 00:06:00.741 
"bdev_raid_add_base_bdev", 00:06:00.741 "bdev_raid_delete", 00:06:00.741 "bdev_raid_create", 00:06:00.741 "bdev_raid_get_bdevs", 00:06:00.741 "bdev_lvol_set_parent_bdev", 00:06:00.741 "bdev_lvol_set_parent", 00:06:00.741 "bdev_lvol_check_shallow_copy", 00:06:00.741 "bdev_lvol_start_shallow_copy", 00:06:00.741 "bdev_lvol_grow_lvstore", 00:06:00.741 "bdev_lvol_get_lvols", 00:06:00.741 "bdev_lvol_get_lvstores", 00:06:00.741 "bdev_lvol_delete", 00:06:00.741 "bdev_lvol_set_read_only", 00:06:00.741 "bdev_lvol_resize", 00:06:00.741 "bdev_lvol_decouple_parent", 00:06:00.741 "bdev_lvol_inflate", 00:06:00.741 "bdev_lvol_rename", 00:06:00.741 "bdev_lvol_clone_bdev", 00:06:00.741 "bdev_lvol_clone", 00:06:00.741 "bdev_lvol_snapshot", 00:06:00.741 "bdev_lvol_create", 00:06:00.741 "bdev_lvol_delete_lvstore", 00:06:00.741 "bdev_lvol_rename_lvstore", 00:06:00.741 "bdev_lvol_create_lvstore", 00:06:00.741 "bdev_passthru_delete", 00:06:00.741 "bdev_passthru_create", 00:06:00.741 "bdev_nvme_cuse_unregister", 00:06:00.741 "bdev_nvme_cuse_register", 00:06:00.741 "bdev_opal_new_user", 00:06:00.741 "bdev_opal_set_lock_state", 00:06:00.741 "bdev_opal_delete", 00:06:00.741 "bdev_opal_get_info", 00:06:00.741 "bdev_opal_create", 00:06:00.741 "bdev_nvme_opal_revert", 00:06:00.741 "bdev_nvme_opal_init", 00:06:00.741 "bdev_nvme_send_cmd", 00:06:00.741 "bdev_nvme_set_keys", 00:06:00.741 "bdev_nvme_get_path_iostat", 00:06:00.741 "bdev_nvme_get_mdns_discovery_info", 00:06:00.741 "bdev_nvme_stop_mdns_discovery", 00:06:00.741 "bdev_nvme_start_mdns_discovery", 00:06:00.741 "bdev_nvme_set_multipath_policy", 00:06:00.741 "bdev_nvme_set_preferred_path", 00:06:00.741 "bdev_nvme_get_io_paths", 00:06:00.741 "bdev_nvme_remove_error_injection", 00:06:00.741 "bdev_nvme_add_error_injection", 00:06:00.741 "bdev_nvme_get_discovery_info", 00:06:00.741 "bdev_nvme_stop_discovery", 00:06:00.741 "bdev_nvme_start_discovery", 00:06:00.741 "bdev_nvme_get_controller_health_info", 00:06:00.741 "bdev_nvme_disable_controller", 00:06:00.741 "bdev_nvme_enable_controller", 00:06:00.741 "bdev_nvme_reset_controller", 00:06:00.741 "bdev_nvme_get_transport_statistics", 00:06:00.741 "bdev_nvme_apply_firmware", 00:06:00.741 "bdev_nvme_detach_controller", 00:06:00.741 "bdev_nvme_get_controllers", 00:06:00.741 "bdev_nvme_attach_controller", 00:06:00.741 "bdev_nvme_set_hotplug", 00:06:00.741 "bdev_nvme_set_options", 00:06:00.741 "bdev_null_resize", 00:06:00.741 "bdev_null_delete", 00:06:00.741 "bdev_null_create", 00:06:00.741 "bdev_malloc_delete", 00:06:00.741 "bdev_malloc_create" 00:06:00.741 ] 00:06:00.741 06:44:08 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:00.741 06:44:08 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:00.741 06:44:08 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:00.741 06:44:08 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:00.741 06:44:08 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1143303 00:06:00.741 06:44:08 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 1143303 ']' 00:06:00.741 06:44:08 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 1143303 00:06:00.741 06:44:08 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:06:00.741 06:44:08 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:00.741 06:44:08 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1143303 00:06:00.741 06:44:08 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:00.741 
06:44:08 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:00.741 06:44:08 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1143303' 00:06:00.741 killing process with pid 1143303 00:06:00.741 06:44:08 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 1143303 00:06:00.741 06:44:08 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 1143303 00:06:00.999 00:06:00.999 real 0m1.109s 00:06:00.999 user 0m1.833s 00:06:00.999 sys 0m0.483s 00:06:00.999 06:44:08 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:00.999 06:44:08 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:00.999 ************************************ 00:06:00.999 END TEST spdkcli_tcp 00:06:00.999 ************************************ 00:06:01.256 06:44:08 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:01.256 06:44:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:01.256 06:44:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:01.256 06:44:08 -- common/autotest_common.sh@10 -- # set +x 00:06:01.256 ************************************ 00:06:01.256 START TEST dpdk_mem_utility 00:06:01.256 ************************************ 00:06:01.257 06:44:08 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:01.257 * Looking for test storage... 00:06:01.257 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility 00:06:01.257 06:44:08 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:01.257 06:44:08 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:06:01.257 06:44:08 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:01.257 06:44:08 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:01.257 06:44:08 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:01.257 06:44:08 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:01.257 06:44:08 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:01.257 06:44:08 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:01.257 06:44:08 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:01.257 06:44:08 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:01.257 06:44:08 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:01.257 06:44:08 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:01.257 06:44:08 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:01.257 06:44:08 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:01.257 06:44:08 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:01.257 06:44:08 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:01.257 06:44:08 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:01.257 06:44:08 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:01.257 06:44:08 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:01.257 06:44:08 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:01.257 06:44:08 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:01.257 06:44:08 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:01.257 06:44:08 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:01.257 06:44:08 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:01.515 06:44:08 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:01.515 06:44:08 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:01.515 06:44:08 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:01.515 06:44:08 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:01.515 06:44:08 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:01.515 06:44:08 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:01.515 06:44:08 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:01.515 06:44:08 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:01.515 06:44:08 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:01.515 06:44:08 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:01.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.515 --rc genhtml_branch_coverage=1 00:06:01.515 --rc genhtml_function_coverage=1 00:06:01.515 --rc genhtml_legend=1 00:06:01.515 --rc geninfo_all_blocks=1 00:06:01.515 --rc geninfo_unexecuted_blocks=1 00:06:01.515 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:01.515 ' 00:06:01.515 06:44:08 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:01.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.515 --rc genhtml_branch_coverage=1 00:06:01.515 --rc genhtml_function_coverage=1 00:06:01.515 --rc genhtml_legend=1 00:06:01.515 --rc geninfo_all_blocks=1 00:06:01.515 --rc geninfo_unexecuted_blocks=1 00:06:01.515 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:01.515 ' 00:06:01.515 06:44:08 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:01.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.515 --rc genhtml_branch_coverage=1 00:06:01.515 --rc genhtml_function_coverage=1 00:06:01.515 --rc genhtml_legend=1 00:06:01.515 --rc geninfo_all_blocks=1 00:06:01.515 --rc geninfo_unexecuted_blocks=1 00:06:01.515 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:01.515 ' 00:06:01.515 06:44:08 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:01.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.515 --rc genhtml_branch_coverage=1 00:06:01.515 --rc genhtml_function_coverage=1 00:06:01.515 --rc genhtml_legend=1 00:06:01.515 --rc geninfo_all_blocks=1 00:06:01.515 --rc geninfo_unexecuted_blocks=1 00:06:01.515 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:01.515 ' 00:06:01.515 06:44:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:01.515 06:44:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1143643 00:06:01.515 06:44:08 dpdk_mem_utility -- 
dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1143643 00:06:01.515 06:44:08 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 1143643 ']' 00:06:01.515 06:44:08 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.515 06:44:08 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:01.515 06:44:08 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.515 06:44:08 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:01.515 06:44:08 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:01.515 06:44:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:01.515 [2024-12-12 06:44:08.808915] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 00:06:01.515 [2024-12-12 06:44:08.808978] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1143643 ] 00:06:01.515 [2024-12-12 06:44:08.878999] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.515 [2024-12-12 06:44:08.921927] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.773 06:44:09 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:01.773 06:44:09 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:06:01.773 06:44:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:01.773 06:44:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:01.773 06:44:09 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:01.773 06:44:09 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:01.773 { 00:06:01.773 "filename": "/tmp/spdk_mem_dump.txt" 00:06:01.773 } 00:06:01.773 06:44:09 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:01.773 06:44:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:01.773 DPDK memory size 818.000000 MiB in 1 heap(s) 00:06:01.773 1 heaps totaling size 818.000000 MiB 00:06:01.773 size: 818.000000 MiB heap id: 0 00:06:01.773 end heaps---------- 00:06:01.773 9 mempools totaling size 603.782043 MiB 00:06:01.773 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:01.773 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:01.773 size: 100.555481 MiB name: bdev_io_1143643 00:06:01.773 size: 50.003479 MiB name: msgpool_1143643 00:06:01.773 size: 36.509338 MiB name: fsdev_io_1143643 00:06:01.773 size: 21.763794 MiB name: PDU_Pool 00:06:01.773 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:01.773 size: 4.133484 MiB name: evtpool_1143643 00:06:01.773 size: 0.026123 MiB name: Session_Pool 00:06:01.773 end mempools------- 00:06:01.773 6 memzones totaling size 4.142822 MiB 00:06:01.773 size: 1.000366 MiB name: RG_ring_0_1143643 00:06:01.773 size: 1.000366 MiB name: RG_ring_1_1143643 00:06:01.773 size: 1.000366 MiB name: RG_ring_4_1143643 
00:06:01.773 size: 1.000366 MiB name: RG_ring_5_1143643 00:06:01.773 size: 0.125366 MiB name: RG_ring_2_1143643 00:06:01.773 size: 0.015991 MiB name: RG_ring_3_1143643 00:06:01.773 end memzones------- 00:06:01.773 06:44:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:01.773 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:06:01.773 list of free elements. size: 10.852478 MiB 00:06:01.773 element at address: 0x200019200000 with size: 0.999878 MiB 00:06:01.773 element at address: 0x200019400000 with size: 0.999878 MiB 00:06:01.773 element at address: 0x200000400000 with size: 0.998535 MiB 00:06:01.773 element at address: 0x200032000000 with size: 0.994446 MiB 00:06:01.773 element at address: 0x200008000000 with size: 0.959839 MiB 00:06:01.773 element at address: 0x200012c00000 with size: 0.944275 MiB 00:06:01.773 element at address: 0x200019600000 with size: 0.936584 MiB 00:06:01.773 element at address: 0x200000200000 with size: 0.717346 MiB 00:06:01.773 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:06:01.773 element at address: 0x200000c00000 with size: 0.495422 MiB 00:06:01.773 element at address: 0x200003e00000 with size: 0.490723 MiB 00:06:01.773 element at address: 0x200019800000 with size: 0.485657 MiB 00:06:01.773 element at address: 0x200010600000 with size: 0.481934 MiB 00:06:01.773 element at address: 0x200028200000 with size: 0.410034 MiB 00:06:01.773 element at address: 0x200000800000 with size: 0.355042 MiB 00:06:01.773 list of standard malloc elements. size: 199.218628 MiB 00:06:01.773 element at address: 0x2000081fff80 with size: 132.000122 MiB 00:06:01.773 element at address: 0x200003ffff80 with size: 64.000122 MiB 00:06:01.773 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:01.773 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:06:01.773 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:06:01.773 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:01.773 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:06:01.773 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:01.773 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:06:01.773 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:01.773 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:01.773 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:06:01.773 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:06:01.773 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:06:01.773 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:06:01.773 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:06:01.773 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:06:01.773 element at address: 0x20000085b040 with size: 0.000183 MiB 00:06:01.773 element at address: 0x20000085b100 with size: 0.000183 MiB 00:06:01.773 element at address: 0x2000008db3c0 with size: 0.000183 MiB 00:06:01.773 element at address: 0x2000008db5c0 with size: 0.000183 MiB 00:06:01.773 element at address: 0x2000008df880 with size: 0.000183 MiB 00:06:01.773 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:06:01.773 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:06:01.773 element at address: 0x200000cff000 with size: 0.000183 MiB 00:06:01.773 element at address: 0x200000cff0c0 with size: 
0.000183 MiB 00:06:01.773 element at address: 0x200003e7da00 with size: 0.000183 MiB 00:06:01.773 element at address: 0x200003e7dac0 with size: 0.000183 MiB 00:06:01.773 element at address: 0x200003efdd80 with size: 0.000183 MiB 00:06:01.773 element at address: 0x2000080fdd80 with size: 0.000183 MiB 00:06:01.773 element at address: 0x20001067b600 with size: 0.000183 MiB 00:06:01.773 element at address: 0x20001067b6c0 with size: 0.000183 MiB 00:06:01.773 element at address: 0x2000106fb980 with size: 0.000183 MiB 00:06:01.773 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:06:01.773 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:06:01.773 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:06:01.773 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:06:01.773 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:06:01.773 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:06:01.773 element at address: 0x200028268f80 with size: 0.000183 MiB 00:06:01.773 element at address: 0x200028269040 with size: 0.000183 MiB 00:06:01.773 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:06:01.773 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:06:01.773 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:06:01.773 list of memzone associated elements. size: 607.928894 MiB 00:06:01.773 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:06:01.773 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:01.773 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:06:01.773 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:01.773 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:06:01.773 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_1143643_0 00:06:01.773 element at address: 0x200000dff380 with size: 48.003052 MiB 00:06:01.773 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1143643_0 00:06:01.773 element at address: 0x2000107fdb80 with size: 36.008911 MiB 00:06:01.773 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_1143643_0 00:06:01.774 element at address: 0x2000199be940 with size: 20.255554 MiB 00:06:01.774 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:01.774 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:06:01.774 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:01.774 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:06:01.774 associated memzone info: size: 3.000122 MiB name: MP_evtpool_1143643_0 00:06:01.774 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:06:01.774 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1143643 00:06:01.774 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:01.774 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1143643 00:06:01.774 element at address: 0x2000106fba40 with size: 1.008118 MiB 00:06:01.774 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:01.774 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:06:01.774 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:01.774 element at address: 0x2000080fde40 with size: 1.008118 MiB 00:06:01.774 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:01.774 element at address: 0x200003efde40 with size: 1.008118 MiB 00:06:01.774 associated memzone info: size: 1.007996 
MiB name: MP_SCSI_TASK_Pool 00:06:01.774 element at address: 0x200000cff180 with size: 1.000488 MiB 00:06:01.774 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1143643 00:06:01.774 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:06:01.774 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1143643 00:06:01.774 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:06:01.774 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1143643 00:06:01.774 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:06:01.774 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1143643 00:06:01.774 element at address: 0x20000085b1c0 with size: 0.500488 MiB 00:06:01.774 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_1143643 00:06:01.774 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:06:01.774 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1143643 00:06:01.774 element at address: 0x20001067b780 with size: 0.500488 MiB 00:06:01.774 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:01.774 element at address: 0x200003e7db80 with size: 0.500488 MiB 00:06:01.774 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:01.774 element at address: 0x20001987c540 with size: 0.250488 MiB 00:06:01.774 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:01.774 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:06:01.774 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_1143643 00:06:01.774 element at address: 0x2000008df940 with size: 0.125488 MiB 00:06:01.774 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1143643 00:06:01.774 element at address: 0x2000080f5b80 with size: 0.031738 MiB 00:06:01.774 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:01.774 element at address: 0x200028269100 with size: 0.023743 MiB 00:06:01.774 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:01.774 element at address: 0x2000008db680 with size: 0.016113 MiB 00:06:01.774 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1143643 00:06:01.774 element at address: 0x20002826f240 with size: 0.002441 MiB 00:06:01.774 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:01.774 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:06:01.774 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1143643 00:06:01.774 element at address: 0x2000008db480 with size: 0.000305 MiB 00:06:01.774 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_1143643 00:06:01.774 element at address: 0x20000085af00 with size: 0.000305 MiB 00:06:01.774 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1143643 00:06:01.774 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:06:01.774 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:01.774 06:44:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:01.774 06:44:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1143643 00:06:01.774 06:44:09 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 1143643 ']' 00:06:01.774 06:44:09 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 1143643 00:06:01.774 06:44:09 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:06:01.774 06:44:09 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:06:01.774 06:44:09 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1143643 00:06:02.032 06:44:09 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:02.032 06:44:09 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:02.032 06:44:09 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1143643' 00:06:02.032 killing process with pid 1143643 00:06:02.032 06:44:09 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 1143643 00:06:02.032 06:44:09 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 1143643 00:06:02.289 00:06:02.289 real 0m0.993s 00:06:02.289 user 0m0.906s 00:06:02.289 sys 0m0.445s 00:06:02.289 06:44:09 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:02.289 06:44:09 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:02.289 ************************************ 00:06:02.289 END TEST dpdk_mem_utility 00:06:02.289 ************************************ 00:06:02.290 06:44:09 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:06:02.290 06:44:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:02.290 06:44:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:02.290 06:44:09 -- common/autotest_common.sh@10 -- # set +x 00:06:02.290 ************************************ 00:06:02.290 START TEST event 00:06:02.290 ************************************ 00:06:02.290 06:44:09 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:06:02.290 * Looking for test storage... 00:06:02.290 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:06:02.290 06:44:09 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:02.290 06:44:09 event -- common/autotest_common.sh@1711 -- # lcov --version 00:06:02.290 06:44:09 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:02.548 06:44:09 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:02.548 06:44:09 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:02.548 06:44:09 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:02.548 06:44:09 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:02.548 06:44:09 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:02.548 06:44:09 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:02.548 06:44:09 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:02.548 06:44:09 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:02.548 06:44:09 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:02.548 06:44:09 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:02.548 06:44:09 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:02.548 06:44:09 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:02.548 06:44:09 event -- scripts/common.sh@344 -- # case "$op" in 00:06:02.548 06:44:09 event -- scripts/common.sh@345 -- # : 1 00:06:02.548 06:44:09 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:02.548 06:44:09 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:02.548 06:44:09 event -- scripts/common.sh@365 -- # decimal 1 00:06:02.548 06:44:09 event -- scripts/common.sh@353 -- # local d=1 00:06:02.548 06:44:09 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:02.548 06:44:09 event -- scripts/common.sh@355 -- # echo 1 00:06:02.548 06:44:09 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:02.548 06:44:09 event -- scripts/common.sh@366 -- # decimal 2 00:06:02.548 06:44:09 event -- scripts/common.sh@353 -- # local d=2 00:06:02.548 06:44:09 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:02.548 06:44:09 event -- scripts/common.sh@355 -- # echo 2 00:06:02.548 06:44:09 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:02.548 06:44:09 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:02.548 06:44:09 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:02.548 06:44:09 event -- scripts/common.sh@368 -- # return 0 00:06:02.548 06:44:09 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:02.548 06:44:09 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:02.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.548 --rc genhtml_branch_coverage=1 00:06:02.548 --rc genhtml_function_coverage=1 00:06:02.548 --rc genhtml_legend=1 00:06:02.548 --rc geninfo_all_blocks=1 00:06:02.548 --rc geninfo_unexecuted_blocks=1 00:06:02.548 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:02.548 ' 00:06:02.548 06:44:09 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:02.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.548 --rc genhtml_branch_coverage=1 00:06:02.548 --rc genhtml_function_coverage=1 00:06:02.548 --rc genhtml_legend=1 00:06:02.548 --rc geninfo_all_blocks=1 00:06:02.548 --rc geninfo_unexecuted_blocks=1 00:06:02.548 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:02.548 ' 00:06:02.548 06:44:09 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:02.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.548 --rc genhtml_branch_coverage=1 00:06:02.548 --rc genhtml_function_coverage=1 00:06:02.548 --rc genhtml_legend=1 00:06:02.548 --rc geninfo_all_blocks=1 00:06:02.548 --rc geninfo_unexecuted_blocks=1 00:06:02.548 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:02.548 ' 00:06:02.548 06:44:09 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:02.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.548 --rc genhtml_branch_coverage=1 00:06:02.548 --rc genhtml_function_coverage=1 00:06:02.548 --rc genhtml_legend=1 00:06:02.548 --rc geninfo_all_blocks=1 00:06:02.548 --rc geninfo_unexecuted_blocks=1 00:06:02.548 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:02.548 ' 00:06:02.548 06:44:09 event -- event/event.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:02.548 06:44:09 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:02.548 06:44:09 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:02.548 06:44:09 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:06:02.548 06:44:09 event -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:06:02.548 06:44:09 event -- common/autotest_common.sh@10 -- # set +x 00:06:02.548 ************************************ 00:06:02.548 START TEST event_perf 00:06:02.548 ************************************ 00:06:02.548 06:44:09 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:02.548 Running I/O for 1 seconds...[2024-12-12 06:44:09.881994] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 00:06:02.548 [2024-12-12 06:44:09.882074] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1143795 ] 00:06:02.548 [2024-12-12 06:44:09.956125] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:02.548 [2024-12-12 06:44:09.999894] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:02.548 [2024-12-12 06:44:09.999988] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:02.548 [2024-12-12 06:44:10.000088] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:06:02.548 [2024-12-12 06:44:10.000091] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.603 Running I/O for 1 seconds... 00:06:03.603 lcore 0: 185539 00:06:03.603 lcore 1: 185537 00:06:03.603 lcore 2: 185539 00:06:03.603 lcore 3: 185539 00:06:03.603 done. 00:06:03.603 00:06:03.603 real 0m1.176s 00:06:03.603 user 0m4.090s 00:06:03.603 sys 0m0.082s 00:06:03.603 06:44:11 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:03.603 06:44:11 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:03.603 ************************************ 00:06:03.603 END TEST event_perf 00:06:03.603 ************************************ 00:06:03.603 06:44:11 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:03.603 06:44:11 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:03.603 06:44:11 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:03.603 06:44:11 event -- common/autotest_common.sh@10 -- # set +x 00:06:03.603 ************************************ 00:06:03.603 START TEST event_reactor 00:06:03.603 ************************************ 00:06:03.603 06:44:11 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:03.861 [2024-12-12 06:44:11.127412] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 
00:06:03.862 [2024-12-12 06:44:11.127467] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1144012 ] 00:06:03.862 [2024-12-12 06:44:11.195094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.862 [2024-12-12 06:44:11.234532] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.796 test_start 00:06:04.796 oneshot 00:06:04.796 tick 100 00:06:04.796 tick 100 00:06:04.796 tick 250 00:06:04.796 tick 100 00:06:04.796 tick 100 00:06:04.796 tick 100 00:06:04.796 tick 250 00:06:04.796 tick 500 00:06:04.796 tick 100 00:06:04.796 tick 100 00:06:04.796 tick 250 00:06:04.796 tick 100 00:06:04.796 tick 100 00:06:04.796 test_end 00:06:04.796 00:06:04.796 real 0m1.151s 00:06:04.796 user 0m1.076s 00:06:04.796 sys 0m0.070s 00:06:04.796 06:44:12 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:04.796 06:44:12 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:04.796 ************************************ 00:06:04.796 END TEST event_reactor 00:06:04.796 ************************************ 00:06:04.796 06:44:12 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:04.796 06:44:12 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:04.796 06:44:12 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:04.796 06:44:12 event -- common/autotest_common.sh@10 -- # set +x 00:06:05.054 ************************************ 00:06:05.054 START TEST event_reactor_perf 00:06:05.054 ************************************ 00:06:05.054 06:44:12 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:05.054 [2024-12-12 06:44:12.368640] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 
00:06:05.054 [2024-12-12 06:44:12.368720] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1144296 ] 00:06:05.054 [2024-12-12 06:44:12.440858] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.055 [2024-12-12 06:44:12.479891] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.429 test_start 00:06:06.429 test_end 00:06:06.429 Performance: 946941 events per second 00:06:06.429 00:06:06.429 real 0m1.164s 00:06:06.429 user 0m1.088s 00:06:06.429 sys 0m0.072s 00:06:06.429 06:44:13 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:06.429 06:44:13 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:06.429 ************************************ 00:06:06.429 END TEST event_reactor_perf 00:06:06.429 ************************************ 00:06:06.429 06:44:13 event -- event/event.sh@49 -- # uname -s 00:06:06.429 06:44:13 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:06.429 06:44:13 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:06.429 06:44:13 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:06.429 06:44:13 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:06.429 06:44:13 event -- common/autotest_common.sh@10 -- # set +x 00:06:06.429 ************************************ 00:06:06.429 START TEST event_scheduler 00:06:06.429 ************************************ 00:06:06.429 06:44:13 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:06.429 * Looking for test storage... 
00:06:06.429 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler 00:06:06.429 06:44:13 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:06.429 06:44:13 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:06:06.429 06:44:13 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:06.429 06:44:13 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:06.429 06:44:13 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:06.429 06:44:13 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:06.429 06:44:13 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:06.429 06:44:13 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:06.429 06:44:13 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:06.429 06:44:13 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:06.429 06:44:13 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:06.429 06:44:13 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:06.429 06:44:13 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:06.429 06:44:13 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:06.429 06:44:13 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:06.429 06:44:13 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:06.429 06:44:13 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:06.429 06:44:13 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:06.429 06:44:13 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:06.429 06:44:13 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:06.429 06:44:13 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:06.429 06:44:13 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:06.429 06:44:13 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:06.429 06:44:13 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:06.429 06:44:13 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:06.429 06:44:13 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:06.429 06:44:13 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:06.429 06:44:13 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:06.429 06:44:13 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:06.429 06:44:13 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:06.429 06:44:13 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:06.429 06:44:13 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:06.429 06:44:13 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:06.429 06:44:13 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:06.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.429 --rc genhtml_branch_coverage=1 00:06:06.429 --rc genhtml_function_coverage=1 00:06:06.429 --rc genhtml_legend=1 00:06:06.429 --rc geninfo_all_blocks=1 00:06:06.429 --rc geninfo_unexecuted_blocks=1 00:06:06.429 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:06.429 ' 00:06:06.429 06:44:13 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:06.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.430 --rc genhtml_branch_coverage=1 00:06:06.430 --rc genhtml_function_coverage=1 00:06:06.430 --rc genhtml_legend=1 00:06:06.430 --rc geninfo_all_blocks=1 00:06:06.430 --rc geninfo_unexecuted_blocks=1 00:06:06.430 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:06.430 ' 00:06:06.430 06:44:13 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:06.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.430 --rc genhtml_branch_coverage=1 00:06:06.430 --rc genhtml_function_coverage=1 00:06:06.430 --rc genhtml_legend=1 00:06:06.430 --rc geninfo_all_blocks=1 00:06:06.430 --rc geninfo_unexecuted_blocks=1 00:06:06.430 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:06.430 ' 00:06:06.430 06:44:13 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:06.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.430 --rc genhtml_branch_coverage=1 00:06:06.430 --rc genhtml_function_coverage=1 00:06:06.430 --rc genhtml_legend=1 00:06:06.430 --rc geninfo_all_blocks=1 00:06:06.430 --rc geninfo_unexecuted_blocks=1 00:06:06.430 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:06.430 ' 00:06:06.430 06:44:13 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:06.430 06:44:13 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1144616 00:06:06.430 06:44:13 event.event_scheduler -- 
scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:06.430 06:44:13 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:06.430 06:44:13 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1144616 00:06:06.430 06:44:13 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 1144616 ']' 00:06:06.430 06:44:13 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.430 06:44:13 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:06.430 06:44:13 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:06.430 06:44:13 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:06.430 06:44:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:06.430 [2024-12-12 06:44:13.816784] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 00:06:06.430 [2024-12-12 06:44:13.816864] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1144616 ] 00:06:06.430 [2024-12-12 06:44:13.884393] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:06.430 [2024-12-12 06:44:13.930585] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.430 [2024-12-12 06:44:13.930670] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:06.430 [2024-12-12 06:44:13.930752] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:06:06.430 [2024-12-12 06:44:13.930754] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:06.689 06:44:13 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:06.689 06:44:13 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:06:06.689 06:44:13 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:06.689 06:44:13 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.689 06:44:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:06.689 [2024-12-12 06:44:13.999437] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:06:06.689 [2024-12-12 06:44:13.999458] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:06.689 [2024-12-12 06:44:13.999470] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:06.689 [2024-12-12 06:44:13.999478] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:06.689 [2024-12-12 06:44:13.999486] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:06.689 06:44:14 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.689 06:44:14 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:06.689 06:44:14 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 
00:06:06.689 06:44:14 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:06.689 [2024-12-12 06:44:14.073784] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:06.689 06:44:14 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.689 06:44:14 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:06.689 06:44:14 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:06.689 06:44:14 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:06.689 06:44:14 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:06.689 ************************************ 00:06:06.689 START TEST scheduler_create_thread 00:06:06.689 ************************************ 00:06:06.689 06:44:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:06:06.689 06:44:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:06.689 06:44:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.689 06:44:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:06.689 2 00:06:06.689 06:44:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.689 06:44:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:06.690 06:44:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.690 06:44:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:06.690 3 00:06:06.690 06:44:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.690 06:44:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:06.690 06:44:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.690 06:44:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:06.690 4 00:06:06.690 06:44:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.690 06:44:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:06.690 06:44:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.690 06:44:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:06.690 5 00:06:06.690 06:44:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.690 06:44:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:06.690 06:44:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.690 
06:44:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:06.690 6 00:06:06.690 06:44:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.690 06:44:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:06.690 06:44:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.690 06:44:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:06.690 7 00:06:06.690 06:44:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.690 06:44:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:06.690 06:44:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.690 06:44:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:06.690 8 00:06:06.690 06:44:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.690 06:44:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:06.690 06:44:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.690 06:44:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:06.690 9 00:06:06.690 06:44:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.690 06:44:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:06.690 06:44:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.690 06:44:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:06.690 10 00:06:06.690 06:44:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.690 06:44:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:06.690 06:44:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.690 06:44:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:06.948 06:44:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.948 06:44:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:06.948 06:44:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:06.948 06:44:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.948 06:44:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:07.206 06:44:14 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:07.206 06:44:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:07.206 06:44:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:07.206 06:44:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:09.104 06:44:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.104 06:44:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:09.104 06:44:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:09.104 06:44:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.104 06:44:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.035 06:44:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.035 00:06:10.035 real 0m3.101s 00:06:10.035 user 0m0.024s 00:06:10.035 sys 0m0.007s 00:06:10.035 06:44:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:10.035 06:44:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.035 ************************************ 00:06:10.035 END TEST scheduler_create_thread 00:06:10.035 ************************************ 00:06:10.035 06:44:17 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:10.035 06:44:17 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1144616 00:06:10.035 06:44:17 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 1144616 ']' 00:06:10.035 06:44:17 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 1144616 00:06:10.035 06:44:17 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:06:10.035 06:44:17 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:10.035 06:44:17 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1144616 00:06:10.035 06:44:17 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:10.035 06:44:17 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:10.035 06:44:17 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1144616' 00:06:10.035 killing process with pid 1144616 00:06:10.035 06:44:17 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 1144616 00:06:10.035 06:44:17 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 1144616 00:06:10.293 [2024-12-12 06:44:17.597063] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
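The scheduler trace above reduces to a small RPC-driven flow: the test app is started paused with --wait-for-rpc, the dynamic scheduler is selected, framework initialization is completed, and threads are created, re-weighted and deleted through the scheduler_plugin RPCs before the process is killed. Below is a minimal standalone sketch of that flow; it assumes an SPDK checkout at $SPDK_DIR, calls scripts/rpc.py directly instead of the harness's rpc_cmd/waitforlisten helpers, and leaves out details such as putting the scheduler_plugin module on PYTHONPATH.

    #!/usr/bin/env bash
    # Hedged sketch of the scheduler test flow traced above (not the harness itself).
    set -e
    SPDK_DIR=${SPDK_DIR:-/path/to/spdk}                      # assumed checkout location
    RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk.sock"

    # Start the test app on a 4-core mask with main core 2, paused until RPC arrives.
    "$SPDK_DIR/test/event/scheduler/scheduler" -m 0xF -p 0x2 --wait-for-rpc &
    pid=$!
    trap 'kill -9 $pid 2>/dev/null' EXIT
    sleep 1                                                  # crude stand-in for waitforlisten

    $RPC framework_set_scheduler dynamic                     # switch to the dynamic scheduler
    $RPC framework_start_init                                # complete framework initialization

    # The scheduler_plugin RPCs return the new thread id, as seen in the trace.
    busy=$($RPC --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100)
    idle=$($RPC --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0)

    $RPC --plugin scheduler_plugin scheduler_thread_set_active "$busy" 50   # re-weight the busy thread
    $RPC --plugin scheduler_plugin scheduler_thread_delete "$idle"          # remove the idle thread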
00:06:10.293 00:06:10.293 real 0m4.176s 00:06:10.293 user 0m6.707s 00:06:10.293 sys 0m0.438s 00:06:10.293 06:44:17 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:10.293 06:44:17 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:10.293 ************************************ 00:06:10.293 END TEST event_scheduler 00:06:10.293 ************************************ 00:06:10.551 06:44:17 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:10.551 06:44:17 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:10.551 06:44:17 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:10.551 06:44:17 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:10.551 06:44:17 event -- common/autotest_common.sh@10 -- # set +x 00:06:10.551 ************************************ 00:06:10.551 START TEST app_repeat 00:06:10.551 ************************************ 00:06:10.551 06:44:17 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:06:10.551 06:44:17 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.551 06:44:17 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.551 06:44:17 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:10.551 06:44:17 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:10.551 06:44:17 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:10.551 06:44:17 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:10.551 06:44:17 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:10.551 06:44:17 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1145422 00:06:10.551 06:44:17 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:10.551 06:44:17 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:10.551 06:44:17 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1145422' 00:06:10.551 Process app_repeat pid: 1145422 00:06:10.551 06:44:17 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:10.551 06:44:17 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:10.551 spdk_app_start Round 0 00:06:10.551 06:44:17 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1145422 /var/tmp/spdk-nbd.sock 00:06:10.551 06:44:17 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1145422 ']' 00:06:10.551 06:44:17 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:10.551 06:44:17 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:10.551 06:44:17 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:10.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:10.551 06:44:17 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:10.551 06:44:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:10.551 [2024-12-12 06:44:17.903046] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 
00:06:10.551 [2024-12-12 06:44:17.903117] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1145422 ] 00:06:10.551 [2024-12-12 06:44:17.977780] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:10.551 [2024-12-12 06:44:18.022540] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:10.551 [2024-12-12 06:44:18.022543] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.809 06:44:18 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:10.809 06:44:18 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:10.809 06:44:18 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:10.809 Malloc0 00:06:10.809 06:44:18 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:11.066 Malloc1 00:06:11.066 06:44:18 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:11.066 06:44:18 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.066 06:44:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:11.066 06:44:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:11.066 06:44:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.066 06:44:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:11.066 06:44:18 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:11.066 06:44:18 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.066 06:44:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:11.066 06:44:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:11.066 06:44:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.066 06:44:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:11.066 06:44:18 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:11.066 06:44:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:11.066 06:44:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:11.066 06:44:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:11.324 /dev/nbd0 00:06:11.324 06:44:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:11.324 06:44:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:11.324 06:44:18 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:11.324 06:44:18 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:11.324 06:44:18 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:11.324 06:44:18 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:11.324 06:44:18 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 
/proc/partitions 00:06:11.324 06:44:18 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:11.324 06:44:18 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:11.324 06:44:18 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:11.324 06:44:18 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:11.324 1+0 records in 00:06:11.324 1+0 records out 00:06:11.324 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000236165 s, 17.3 MB/s 00:06:11.324 06:44:18 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:11.324 06:44:18 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:11.324 06:44:18 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:11.324 06:44:18 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:11.324 06:44:18 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:11.324 06:44:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:11.324 06:44:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:11.324 06:44:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:11.581 /dev/nbd1 00:06:11.581 06:44:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:11.581 06:44:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:11.581 06:44:18 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:11.581 06:44:18 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:11.581 06:44:18 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:11.581 06:44:18 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:11.581 06:44:18 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:11.581 06:44:18 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:11.581 06:44:18 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:11.581 06:44:18 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:11.581 06:44:18 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:11.581 1+0 records in 00:06:11.581 1+0 records out 00:06:11.581 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000224084 s, 18.3 MB/s 00:06:11.581 06:44:19 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:11.581 06:44:19 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:11.581 06:44:19 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:11.581 06:44:19 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:11.581 06:44:19 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:11.581 06:44:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:11.581 06:44:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 
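The waitfornbd calls traced above (for nbd0 and nbd1) follow a simple probe pattern: poll /proc/partitions until the kernel publishes the device, then issue a single 4 KiB direct-I/O read and check that the output file is non-empty. A hedged sketch of that pattern, with the retry delay and scratch path as assumptions:

    # Sketch of the wait-for-nbd probe seen above; scratch path and sleep interval are assumed.
    waitfornbd_sketch() {
        local nbd_name=$1 i size tmp=/tmp/nbdtest
        # Wait for the partition table entry to appear.
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        # Confirm the device actually answers a direct read.
        for ((i = 1; i <= 20; i++)); do
            if dd if=/dev/$nbd_name of="$tmp" bs=4096 count=1 iflag=direct 2>/dev/null; then
                size=$(stat -c %s "$tmp")
                rm -f "$tmp"
                [ "$size" != 0 ] && return 0
            fi
            sleep 0.1
        done
        return 1
    }
    # e.g. waitfornbd_sketch nbd0 after nbd_start_disk Malloc0 /dev/nbd0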
00:06:11.581 06:44:19 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:11.581 06:44:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.581 06:44:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:11.839 06:44:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:11.839 { 00:06:11.839 "nbd_device": "/dev/nbd0", 00:06:11.839 "bdev_name": "Malloc0" 00:06:11.839 }, 00:06:11.839 { 00:06:11.839 "nbd_device": "/dev/nbd1", 00:06:11.839 "bdev_name": "Malloc1" 00:06:11.839 } 00:06:11.839 ]' 00:06:11.839 06:44:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:11.839 { 00:06:11.839 "nbd_device": "/dev/nbd0", 00:06:11.839 "bdev_name": "Malloc0" 00:06:11.839 }, 00:06:11.839 { 00:06:11.839 "nbd_device": "/dev/nbd1", 00:06:11.839 "bdev_name": "Malloc1" 00:06:11.839 } 00:06:11.839 ]' 00:06:11.839 06:44:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:11.839 06:44:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:11.839 /dev/nbd1' 00:06:11.839 06:44:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:11.839 /dev/nbd1' 00:06:11.839 06:44:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:11.839 06:44:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:11.839 06:44:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:11.839 06:44:19 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:11.839 06:44:19 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:11.839 06:44:19 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:11.839 06:44:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.839 06:44:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:11.839 06:44:19 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:11.839 06:44:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:11.839 06:44:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:11.839 06:44:19 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:11.839 256+0 records in 00:06:11.839 256+0 records out 00:06:11.839 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0111851 s, 93.7 MB/s 00:06:11.839 06:44:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:11.839 06:44:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:11.839 256+0 records in 00:06:11.839 256+0 records out 00:06:11.839 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0197948 s, 53.0 MB/s 00:06:11.839 06:44:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:11.839 06:44:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:11.839 256+0 records in 00:06:11.839 256+0 records out 00:06:11.839 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0212298 s, 49.4 
MB/s 00:06:11.839 06:44:19 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:11.839 06:44:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.839 06:44:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:11.839 06:44:19 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:11.839 06:44:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:11.839 06:44:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:11.839 06:44:19 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:11.839 06:44:19 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:11.839 06:44:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:11.839 06:44:19 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:11.839 06:44:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:11.839 06:44:19 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:11.839 06:44:19 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:11.839 06:44:19 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.839 06:44:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.839 06:44:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:11.839 06:44:19 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:11.839 06:44:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:11.839 06:44:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:12.097 06:44:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:12.097 06:44:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:12.097 06:44:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:12.097 06:44:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:12.097 06:44:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:12.097 06:44:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:12.097 06:44:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:12.097 06:44:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:12.097 06:44:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:12.097 06:44:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:12.354 06:44:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:12.354 06:44:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:12.354 06:44:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:12.354 06:44:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:12.354 06:44:19 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:12.354 06:44:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:12.354 06:44:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:12.354 06:44:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:12.354 06:44:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:12.354 06:44:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.354 06:44:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:12.611 06:44:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:12.611 06:44:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:12.611 06:44:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:12.611 06:44:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:12.611 06:44:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:12.611 06:44:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:12.611 06:44:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:12.611 06:44:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:12.611 06:44:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:12.611 06:44:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:12.611 06:44:20 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:12.611 06:44:20 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:12.611 06:44:20 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:12.869 06:44:20 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:12.869 [2024-12-12 06:44:20.371563] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:13.126 [2024-12-12 06:44:20.408978] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:13.126 [2024-12-12 06:44:20.408980] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.126 [2024-12-12 06:44:20.449990] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:13.126 [2024-12-12 06:44:20.450032] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:16.403 06:44:23 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:16.403 06:44:23 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:16.403 spdk_app_start Round 1 00:06:16.403 06:44:23 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1145422 /var/tmp/spdk-nbd.sock 00:06:16.403 06:44:23 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1145422 ']' 00:06:16.403 06:44:23 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:16.403 06:44:23 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:16.403 06:44:23 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:16.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
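Each app_repeat round traced above verifies the two malloc bdevs with the same write/read-back sequence: 1 MiB of random data is staged in a scratch file, written to every exported nbd device with direct I/O, and then compared back with cmp before the devices are detached. A minimal sketch of that round trip (scratch-file location assumed):

    # Sketch of the nbd write/verify round trip from the trace; scratch path assumed.
    nbd_list=(/dev/nbd0 /dev/nbd1)
    tmp=/tmp/nbdrandtest

    dd if=/dev/urandom of="$tmp" bs=4096 count=256             # stage 1 MiB of random data
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct   # write it through each nbd device
    done
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp" "$dev"                              # read back and byte-compare
    done
    rm "$tmp"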
00:06:16.403 06:44:23 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:16.403 06:44:23 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:16.403 06:44:23 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:16.403 06:44:23 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:16.403 06:44:23 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:16.403 Malloc0 00:06:16.403 06:44:23 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:16.403 Malloc1 00:06:16.403 06:44:23 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:16.403 06:44:23 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.403 06:44:23 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:16.403 06:44:23 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:16.403 06:44:23 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:16.403 06:44:23 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:16.403 06:44:23 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:16.403 06:44:23 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.403 06:44:23 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:16.403 06:44:23 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:16.403 06:44:23 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:16.403 06:44:23 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:16.403 06:44:23 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:16.403 06:44:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:16.403 06:44:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:16.403 06:44:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:16.661 /dev/nbd0 00:06:16.661 06:44:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:16.661 06:44:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:16.661 06:44:24 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:16.661 06:44:24 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:16.661 06:44:24 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:16.661 06:44:24 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:16.661 06:44:24 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:16.661 06:44:24 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:16.661 06:44:24 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:16.661 06:44:24 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:16.661 06:44:24 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:16.661 1+0 records in 00:06:16.661 1+0 records out 00:06:16.661 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000269983 s, 15.2 MB/s 00:06:16.661 06:44:24 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:16.661 06:44:24 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:16.661 06:44:24 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:16.661 06:44:24 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:16.661 06:44:24 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:16.661 06:44:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:16.661 06:44:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:16.661 06:44:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:16.918 /dev/nbd1 00:06:16.918 06:44:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:16.918 06:44:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:16.918 06:44:24 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:16.918 06:44:24 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:16.918 06:44:24 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:16.918 06:44:24 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:16.918 06:44:24 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:16.918 06:44:24 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:16.918 06:44:24 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:16.918 06:44:24 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:16.918 06:44:24 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:16.918 1+0 records in 00:06:16.918 1+0 records out 00:06:16.918 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000249987 s, 16.4 MB/s 00:06:16.918 06:44:24 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:16.918 06:44:24 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:16.918 06:44:24 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:16.918 06:44:24 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:16.918 06:44:24 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:16.918 06:44:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:16.918 06:44:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:16.918 06:44:24 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:16.918 06:44:24 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.918 06:44:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_get_disks 00:06:17.176 06:44:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:17.176 { 00:06:17.176 "nbd_device": "/dev/nbd0", 00:06:17.176 "bdev_name": "Malloc0" 00:06:17.176 }, 00:06:17.176 { 00:06:17.176 "nbd_device": "/dev/nbd1", 00:06:17.176 "bdev_name": "Malloc1" 00:06:17.176 } 00:06:17.176 ]' 00:06:17.176 06:44:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:17.176 { 00:06:17.176 "nbd_device": "/dev/nbd0", 00:06:17.176 "bdev_name": "Malloc0" 00:06:17.176 }, 00:06:17.176 { 00:06:17.176 "nbd_device": "/dev/nbd1", 00:06:17.176 "bdev_name": "Malloc1" 00:06:17.176 } 00:06:17.176 ]' 00:06:17.176 06:44:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:17.176 06:44:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:17.176 /dev/nbd1' 00:06:17.176 06:44:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:17.176 /dev/nbd1' 00:06:17.176 06:44:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:17.176 06:44:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:17.176 06:44:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:17.176 06:44:24 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:17.176 06:44:24 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:17.176 06:44:24 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:17.176 06:44:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.176 06:44:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:17.176 06:44:24 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:17.176 06:44:24 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:17.176 06:44:24 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:17.176 06:44:24 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:17.176 256+0 records in 00:06:17.176 256+0 records out 00:06:17.176 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106653 s, 98.3 MB/s 00:06:17.176 06:44:24 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:17.176 06:44:24 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:17.176 256+0 records in 00:06:17.176 256+0 records out 00:06:17.176 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0196516 s, 53.4 MB/s 00:06:17.176 06:44:24 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:17.176 06:44:24 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:17.176 256+0 records in 00:06:17.176 256+0 records out 00:06:17.176 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0211263 s, 49.6 MB/s 00:06:17.176 06:44:24 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:17.176 06:44:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.176 06:44:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:17.176 06:44:24 event.app_repeat -- bdev/nbd_common.sh@71 -- # 
local operation=verify 00:06:17.176 06:44:24 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:17.176 06:44:24 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:17.176 06:44:24 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:17.176 06:44:24 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:17.176 06:44:24 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:17.176 06:44:24 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:17.176 06:44:24 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:17.176 06:44:24 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:17.176 06:44:24 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:17.176 06:44:24 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.176 06:44:24 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.176 06:44:24 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:17.176 06:44:24 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:17.176 06:44:24 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:17.176 06:44:24 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:17.434 06:44:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:17.434 06:44:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:17.434 06:44:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:17.434 06:44:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:17.434 06:44:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:17.434 06:44:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:17.434 06:44:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:17.434 06:44:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:17.434 06:44:24 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:17.434 06:44:24 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:17.692 06:44:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:17.692 06:44:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:17.692 06:44:25 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:17.692 06:44:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:17.692 06:44:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:17.692 06:44:25 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:17.692 06:44:25 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:17.692 06:44:25 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:17.692 06:44:25 event.app_repeat -- bdev/nbd_common.sh@104 -- # 
nbd_get_count /var/tmp/spdk-nbd.sock 00:06:17.692 06:44:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.692 06:44:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:17.692 06:44:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:17.692 06:44:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:17.692 06:44:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:17.950 06:44:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:17.950 06:44:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:17.950 06:44:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:17.950 06:44:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:17.950 06:44:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:17.950 06:44:25 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:17.950 06:44:25 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:17.950 06:44:25 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:17.950 06:44:25 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:17.950 06:44:25 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:17.950 06:44:25 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:18.208 [2024-12-12 06:44:25.610459] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:18.208 [2024-12-12 06:44:25.646823] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:18.208 [2024-12-12 06:44:25.646825] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.208 [2024-12-12 06:44:25.688651] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:18.208 [2024-12-12 06:44:25.688694] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:21.482 06:44:28 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:21.483 06:44:28 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:21.483 spdk_app_start Round 2 00:06:21.483 06:44:28 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1145422 /var/tmp/spdk-nbd.sock 00:06:21.483 06:44:28 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1145422 ']' 00:06:21.483 06:44:28 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:21.483 06:44:28 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:21.483 06:44:28 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:21.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
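Before and after detaching the devices, the trace above counts how many nbd devices the target still exports: nbd_get_disks returns a JSON array, jq extracts the nbd_device fields, and grep -c counts the /dev/nbd entries (falling back to 0 for an empty list). A hedged sketch of that check, with the rpc.py path assumed:

    # Sketch of the nbd device count check seen in the trace; rpc.py path assumed.
    RPC="/path/to/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    expected=2

    disks_json=$($RPC nbd_get_disks)                            # e.g. [{"nbd_device": "/dev/nbd0", ...}, ...]
    disks=$(echo "$disks_json" | jq -r '.[] | .nbd_device')
    count=$(echo "$disks" | grep -c /dev/nbd || true)           # grep -c exits non-zero on zero matches

    if [ "$count" -ne "$expected" ]; then
        echo "expected $expected exported nbd devices, found $count" >&2
        exit 1
    fi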
00:06:21.483 06:44:28 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:21.483 06:44:28 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:21.483 06:44:28 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:21.483 06:44:28 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:21.483 06:44:28 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:21.483 Malloc0 00:06:21.483 06:44:28 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:21.741 Malloc1 00:06:21.741 06:44:29 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:21.741 06:44:29 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.741 06:44:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:21.741 06:44:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:21.741 06:44:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:21.741 06:44:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:21.741 06:44:29 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:21.741 06:44:29 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.741 06:44:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:21.741 06:44:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:21.741 06:44:29 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:21.741 06:44:29 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:21.741 06:44:29 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:21.741 06:44:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:21.741 06:44:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:21.741 06:44:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:21.741 /dev/nbd0 00:06:21.741 06:44:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:21.741 06:44:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:21.741 06:44:29 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:21.741 06:44:29 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:21.741 06:44:29 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:21.741 06:44:29 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:21.741 06:44:29 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:21.999 06:44:29 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:21.999 06:44:29 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:21.999 06:44:29 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:21.999 06:44:29 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:21.999 1+0 records in 00:06:21.999 1+0 records out 00:06:21.999 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000248704 s, 16.5 MB/s 00:06:21.999 06:44:29 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:21.999 06:44:29 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:21.999 06:44:29 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:21.999 06:44:29 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:21.999 06:44:29 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:21.999 06:44:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:21.999 06:44:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:21.999 06:44:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:21.999 /dev/nbd1 00:06:21.999 06:44:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:21.999 06:44:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:21.999 06:44:29 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:21.999 06:44:29 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:21.999 06:44:29 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:21.999 06:44:29 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:22.000 06:44:29 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:22.000 06:44:29 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:22.000 06:44:29 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:22.000 06:44:29 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:22.000 06:44:29 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:22.000 1+0 records in 00:06:22.000 1+0 records out 00:06:22.000 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000273964 s, 15.0 MB/s 00:06:22.000 06:44:29 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:22.258 06:44:29 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:22.258 06:44:29 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:22.258 06:44:29 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:22.258 06:44:29 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:22.258 06:44:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:22.258 06:44:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:22.258 06:44:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:22.258 06:44:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.258 06:44:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_get_disks 00:06:22.258 06:44:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:22.258 { 00:06:22.258 "nbd_device": "/dev/nbd0", 00:06:22.258 "bdev_name": "Malloc0" 00:06:22.258 }, 00:06:22.258 { 00:06:22.258 "nbd_device": "/dev/nbd1", 00:06:22.258 "bdev_name": "Malloc1" 00:06:22.258 } 00:06:22.258 ]' 00:06:22.258 06:44:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:22.258 { 00:06:22.258 "nbd_device": "/dev/nbd0", 00:06:22.258 "bdev_name": "Malloc0" 00:06:22.258 }, 00:06:22.258 { 00:06:22.258 "nbd_device": "/dev/nbd1", 00:06:22.258 "bdev_name": "Malloc1" 00:06:22.258 } 00:06:22.258 ]' 00:06:22.258 06:44:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:22.258 06:44:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:22.258 /dev/nbd1' 00:06:22.258 06:44:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:22.258 /dev/nbd1' 00:06:22.258 06:44:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:22.258 06:44:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:22.258 06:44:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:22.258 06:44:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:22.258 06:44:29 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:22.258 06:44:29 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:22.258 06:44:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:22.258 06:44:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:22.258 06:44:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:22.258 06:44:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:22.258 06:44:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:22.258 06:44:29 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:22.516 256+0 records in 00:06:22.516 256+0 records out 00:06:22.516 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0110662 s, 94.8 MB/s 00:06:22.516 06:44:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:22.516 06:44:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:22.516 256+0 records in 00:06:22.516 256+0 records out 00:06:22.516 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.020541 s, 51.0 MB/s 00:06:22.516 06:44:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:22.516 06:44:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:22.516 256+0 records in 00:06:22.516 256+0 records out 00:06:22.516 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0225958 s, 46.4 MB/s 00:06:22.516 06:44:29 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:22.516 06:44:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:22.516 06:44:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:22.516 06:44:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # 
local operation=verify 00:06:22.516 06:44:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:22.516 06:44:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:22.516 06:44:29 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:22.516 06:44:29 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:22.516 06:44:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:22.516 06:44:29 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:22.516 06:44:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:22.516 06:44:29 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:22.516 06:44:29 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:22.516 06:44:29 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.516 06:44:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:22.516 06:44:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:22.516 06:44:29 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:22.516 06:44:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:22.516 06:44:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:22.774 06:44:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:22.774 06:44:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:22.775 06:44:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:22.775 06:44:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:22.775 06:44:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:22.775 06:44:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:22.775 06:44:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:22.775 06:44:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:22.775 06:44:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:22.775 06:44:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:22.775 06:44:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:23.033 06:44:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:23.033 06:44:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:23.033 06:44:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:23.033 06:44:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:23.033 06:44:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:23.033 06:44:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:23.033 06:44:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:23.033 06:44:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # 
nbd_get_count /var/tmp/spdk-nbd.sock 00:06:23.033 06:44:30 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.033 06:44:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:23.033 06:44:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:23.033 06:44:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:23.033 06:44:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:23.033 06:44:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:23.033 06:44:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:23.033 06:44:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:23.033 06:44:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:23.033 06:44:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:23.033 06:44:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:23.033 06:44:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:23.033 06:44:30 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:23.033 06:44:30 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:23.033 06:44:30 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:23.291 06:44:30 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:23.549 [2024-12-12 06:44:30.913374] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:23.549 [2024-12-12 06:44:30.949482] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:23.549 [2024-12-12 06:44:30.949485] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.549 [2024-12-12 06:44:30.990522] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:23.549 [2024-12-12 06:44:30.990566] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:26.831 06:44:33 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1145422 /var/tmp/spdk-nbd.sock 00:06:26.831 06:44:33 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1145422 ']' 00:06:26.831 06:44:33 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:26.831 06:44:33 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:26.831 06:44:33 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:26.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
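The app_repeat round traced above exercises SPDK's NBD path end to end: two malloc bdevs are exported as /dev/nbd0 and /dev/nbd1, 1 MiB of random data is pushed through each device and compared back, and the devices are torn down until nbd_get_disks reports an empty list. A minimal sketch of that round trip, reusing only commands visible in the trace (the scratch-file path below is illustrative; the real helpers are the nbd_common.sh functions shown in the trace):

  rpc=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-nbd.sock
  tmp=/tmp/nbdrandtest                       # illustrative scratch file

  # export each malloc bdev as an NBD block device
  $rpc -s $sock nbd_start_disk Malloc0 /dev/nbd0
  $rpc -s $sock nbd_start_disk Malloc1 /dev/nbd1

  # write 1 MiB of random data through each device, then read back and compare
  dd if=/dev/urandom of=$tmp bs=4096 count=256
  for nbd in /dev/nbd0 /dev/nbd1; do
      dd if=$tmp of=$nbd bs=4096 count=256 oflag=direct
      cmp -b -n 1M $tmp $nbd                 # any mismatch fails the round
  done

  # tear down and confirm no NBD devices remain
  $rpc -s $sock nbd_stop_disk /dev/nbd0
  $rpc -s $sock nbd_stop_disk /dev/nbd1
  $rpc -s $sock nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true   # expect 0

This assumes a target is already listening on /var/tmp/spdk-nbd.sock with Malloc0 and Malloc1 created, as the trace shows via bdev_malloc_create 64 4096.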
00:06:26.831 06:44:33 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:26.831 06:44:33 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:26.831 06:44:33 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:26.831 06:44:33 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:26.831 06:44:33 event.app_repeat -- event/event.sh@39 -- # killprocess 1145422 00:06:26.831 06:44:33 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 1145422 ']' 00:06:26.831 06:44:33 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 1145422 00:06:26.831 06:44:33 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:06:26.831 06:44:33 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:26.831 06:44:33 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1145422 00:06:26.831 06:44:34 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:26.831 06:44:34 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:26.831 06:44:34 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1145422' 00:06:26.831 killing process with pid 1145422 00:06:26.831 06:44:34 event.app_repeat -- common/autotest_common.sh@973 -- # kill 1145422 00:06:26.831 06:44:34 event.app_repeat -- common/autotest_common.sh@978 -- # wait 1145422 00:06:26.831 spdk_app_start is called in Round 0. 00:06:26.831 Shutdown signal received, stop current app iteration 00:06:26.831 Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 reinitialization... 00:06:26.831 spdk_app_start is called in Round 1. 00:06:26.831 Shutdown signal received, stop current app iteration 00:06:26.831 Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 reinitialization... 00:06:26.831 spdk_app_start is called in Round 2. 00:06:26.831 Shutdown signal received, stop current app iteration 00:06:26.831 Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 reinitialization... 00:06:26.831 spdk_app_start is called in Round 3. 
00:06:26.831 Shutdown signal received, stop current app iteration 00:06:26.831 06:44:34 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:26.831 06:44:34 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:26.831 00:06:26.831 real 0m16.276s 00:06:26.831 user 0m35.067s 00:06:26.831 sys 0m3.200s 00:06:26.831 06:44:34 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:26.831 06:44:34 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:26.831 ************************************ 00:06:26.831 END TEST app_repeat 00:06:26.831 ************************************ 00:06:26.831 06:44:34 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:26.831 06:44:34 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:26.831 06:44:34 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:26.831 06:44:34 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:26.831 06:44:34 event -- common/autotest_common.sh@10 -- # set +x 00:06:26.831 ************************************ 00:06:26.831 START TEST cpu_locks 00:06:26.831 ************************************ 00:06:26.831 06:44:34 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:26.831 * Looking for test storage... 00:06:26.831 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:06:26.831 06:44:34 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:26.831 06:44:34 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:06:26.831 06:44:34 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:27.089 06:44:34 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:27.089 06:44:34 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:27.089 06:44:34 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:27.089 06:44:34 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:27.089 06:44:34 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:27.089 06:44:34 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:27.089 06:44:34 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:27.089 06:44:34 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:27.089 06:44:34 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:27.089 06:44:34 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:27.089 06:44:34 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:27.089 06:44:34 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:27.089 06:44:34 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:27.089 06:44:34 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:27.089 06:44:34 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:27.089 06:44:34 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:27.089 06:44:34 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:27.089 06:44:34 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:27.089 06:44:34 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:27.089 06:44:34 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:27.089 06:44:34 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:27.089 06:44:34 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:27.089 06:44:34 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:27.089 06:44:34 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:27.089 06:44:34 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:27.089 06:44:34 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:27.089 06:44:34 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:27.089 06:44:34 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:27.089 06:44:34 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:27.089 06:44:34 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:27.089 06:44:34 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:27.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.089 --rc genhtml_branch_coverage=1 00:06:27.089 --rc genhtml_function_coverage=1 00:06:27.089 --rc genhtml_legend=1 00:06:27.089 --rc geninfo_all_blocks=1 00:06:27.089 --rc geninfo_unexecuted_blocks=1 00:06:27.089 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:27.089 ' 00:06:27.089 06:44:34 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:27.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.089 --rc genhtml_branch_coverage=1 00:06:27.089 --rc genhtml_function_coverage=1 00:06:27.089 --rc genhtml_legend=1 00:06:27.089 --rc geninfo_all_blocks=1 00:06:27.089 --rc geninfo_unexecuted_blocks=1 00:06:27.089 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:27.089 ' 00:06:27.089 06:44:34 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:27.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.089 --rc genhtml_branch_coverage=1 00:06:27.089 --rc genhtml_function_coverage=1 00:06:27.089 --rc genhtml_legend=1 00:06:27.089 --rc geninfo_all_blocks=1 00:06:27.089 --rc geninfo_unexecuted_blocks=1 00:06:27.089 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:27.089 ' 00:06:27.089 06:44:34 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:27.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.089 --rc genhtml_branch_coverage=1 00:06:27.089 --rc genhtml_function_coverage=1 00:06:27.089 --rc genhtml_legend=1 00:06:27.089 --rc geninfo_all_blocks=1 00:06:27.089 --rc geninfo_unexecuted_blocks=1 00:06:27.089 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:27.089 ' 00:06:27.089 06:44:34 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:27.089 06:44:34 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:27.089 06:44:34 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:27.089 06:44:34 event.cpu_locks -- 
event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:27.089 06:44:34 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:27.089 06:44:34 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:27.089 06:44:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:27.089 ************************************ 00:06:27.089 START TEST default_locks 00:06:27.089 ************************************ 00:06:27.090 06:44:34 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:06:27.090 06:44:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:27.090 06:44:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1148421 00:06:27.090 06:44:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1148421 00:06:27.090 06:44:34 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 1148421 ']' 00:06:27.090 06:44:34 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.090 06:44:34 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:27.090 06:44:34 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.090 06:44:34 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:27.090 06:44:34 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:27.090 [2024-12-12 06:44:34.451604] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 
00:06:27.090 [2024-12-12 06:44:34.451647] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1148421 ] 00:06:27.090 [2024-12-12 06:44:34.516040] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.090 [2024-12-12 06:44:34.558898] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.347 06:44:34 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:27.347 06:44:34 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:06:27.347 06:44:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1148421 00:06:27.347 06:44:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1148421 00:06:27.347 06:44:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:27.604 lslocks: write error 00:06:27.605 06:44:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1148421 00:06:27.605 06:44:35 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 1148421 ']' 00:06:27.605 06:44:35 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 1148421 00:06:27.605 06:44:35 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:06:27.605 06:44:35 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:27.605 06:44:35 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1148421 00:06:27.862 06:44:35 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:27.862 06:44:35 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:27.862 06:44:35 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1148421' 00:06:27.862 killing process with pid 1148421 00:06:27.862 06:44:35 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 1148421 00:06:27.862 06:44:35 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 1148421 00:06:28.119 06:44:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1148421 00:06:28.119 06:44:35 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:06:28.119 06:44:35 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1148421 00:06:28.119 06:44:35 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:28.119 06:44:35 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:28.119 06:44:35 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:28.119 06:44:35 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:28.119 06:44:35 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 1148421 00:06:28.119 06:44:35 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 1148421 ']' 00:06:28.119 06:44:35 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.119 06:44:35 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:06:28.119 06:44:35 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:28.119 06:44:35 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:28.119 06:44:35 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:28.119 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1148421) - No such process 00:06:28.119 ERROR: process (pid: 1148421) is no longer running 00:06:28.119 06:44:35 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:28.119 06:44:35 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:06:28.119 06:44:35 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:06:28.119 06:44:35 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:28.119 06:44:35 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:28.119 06:44:35 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:28.119 06:44:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:28.119 06:44:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:28.119 06:44:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:28.119 06:44:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:28.119 00:06:28.119 real 0m1.006s 00:06:28.119 user 0m0.971s 00:06:28.119 sys 0m0.460s 00:06:28.119 06:44:35 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:28.119 06:44:35 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:28.119 ************************************ 00:06:28.119 END TEST default_locks 00:06:28.119 ************************************ 00:06:28.119 06:44:35 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:28.119 06:44:35 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:28.119 06:44:35 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:28.119 06:44:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:28.119 ************************************ 00:06:28.119 START TEST default_locks_via_rpc 00:06:28.119 ************************************ 00:06:28.119 06:44:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:06:28.119 06:44:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:28.119 06:44:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1148666 00:06:28.119 06:44:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1148666 00:06:28.119 06:44:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1148666 ']' 00:06:28.119 06:44:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.119 06:44:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 
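The default_locks test that just finished hinges on spdk_tgt holding a lock whose name contains spdk_cpu_lock for each core it claims, which the locks_exist helper probes with lslocks. A compressed sketch of that check for the pid seen in the trace (the standalone form below is illustrative; the real flow uses waitforlisten and killprocess from autotest_common.sh):

  pid=1148421                                # spdk_tgt started with -m 0x1

  # a claimed core shows up as a lock entry containing "spdk_cpu_lock"
  if lslocks -p "$pid" | grep -q spdk_cpu_lock; then
      echo "core lock held by pid $pid"
  fi

  # once the target is killed, waiting on the same pid is expected to fail
  kill "$pid"

After the kill, the trace shows the expected negative result: waitforlisten reports "No such process" and the NOT wrapper turns that failure into a pass.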
00:06:28.119 06:44:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:28.119 06:44:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:28.119 06:44:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:28.119 [2024-12-12 06:44:35.528343] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 00:06:28.119 [2024-12-12 06:44:35.528385] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1148666 ] 00:06:28.120 [2024-12-12 06:44:35.595459] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.120 [2024-12-12 06:44:35.639512] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.377 06:44:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:28.377 06:44:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:28.377 06:44:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:28.377 06:44:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.377 06:44:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:28.377 06:44:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.377 06:44:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:28.377 06:44:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:28.377 06:44:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:28.377 06:44:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:28.377 06:44:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:28.377 06:44:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.377 06:44:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:28.377 06:44:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.377 06:44:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1148666 00:06:28.377 06:44:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1148666 00:06:28.377 06:44:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:28.635 06:44:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1148666 00:06:28.635 06:44:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 1148666 ']' 00:06:28.635 06:44:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 1148666 00:06:28.635 06:44:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:06:28.635 06:44:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- 
# '[' Linux = Linux ']' 00:06:28.635 06:44:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1148666 00:06:28.635 06:44:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:28.635 06:44:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:28.635 06:44:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1148666' 00:06:28.635 killing process with pid 1148666 00:06:28.635 06:44:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 1148666 00:06:28.635 06:44:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 1148666 00:06:29.200 00:06:29.200 real 0m0.930s 00:06:29.200 user 0m0.899s 00:06:29.200 sys 0m0.453s 00:06:29.200 06:44:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:29.200 06:44:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.200 ************************************ 00:06:29.200 END TEST default_locks_via_rpc 00:06:29.200 ************************************ 00:06:29.200 06:44:36 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:29.200 06:44:36 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:29.200 06:44:36 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:29.200 06:44:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:29.200 ************************************ 00:06:29.200 START TEST non_locking_app_on_locked_coremask 00:06:29.200 ************************************ 00:06:29.200 06:44:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:06:29.200 06:44:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1148955 00:06:29.200 06:44:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1148955 /var/tmp/spdk.sock 00:06:29.200 06:44:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1148955 ']' 00:06:29.200 06:44:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.200 06:44:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:29.200 06:44:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.200 06:44:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:29.200 06:44:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:29.200 06:44:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:29.200 [2024-12-12 06:44:36.546233] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 
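default_locks_via_rpc, which ended just above, toggles the same per-core lock at runtime instead of across process lifetimes: framework_disable_cpumask_locks releases the lock and framework_enable_cpumask_locks re-takes it, after which locks_exist finds it again via lslocks. A rough sketch of that sequence (rpc_cmd in the trace wraps scripts/rpc.py; the explicit call and the lock-file glob below are illustrative equivalents):

  rpc=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py
  pid=1148666                                      # spdk_tgt started with -m 0x1

  $rpc framework_disable_cpumask_locks             # release the core locks at runtime
  ls /var/tmp/spdk_cpu_lock* 2>/dev/null           # illustrative path; expected to match nothing now

  $rpc framework_enable_cpumask_locks              # take the locks again
  lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "lock re-acquired"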
00:06:29.200 [2024-12-12 06:44:36.546288] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1148955 ] 00:06:29.200 [2024-12-12 06:44:36.616269] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.200 [2024-12-12 06:44:36.658314] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.459 06:44:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:29.459 06:44:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:29.459 06:44:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:29.459 06:44:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1148959 00:06:29.459 06:44:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1148959 /var/tmp/spdk2.sock 00:06:29.459 06:44:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1148959 ']' 00:06:29.459 06:44:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:29.459 06:44:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:29.459 06:44:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:29.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:29.459 06:44:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:29.459 06:44:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:29.459 [2024-12-12 06:44:36.878957] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 00:06:29.459 [2024-12-12 06:44:36.879010] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1148959 ] 00:06:29.459 [2024-12-12 06:44:36.972976] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:29.459 [2024-12-12 06:44:36.972998] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.716 [2024-12-12 06:44:37.052554] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.281 06:44:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:30.281 06:44:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:30.281 06:44:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1148955 00:06:30.281 06:44:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1148955 00:06:30.281 06:44:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:31.655 lslocks: write error 00:06:31.655 06:44:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1148955 00:06:31.655 06:44:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1148955 ']' 00:06:31.655 06:44:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1148955 00:06:31.655 06:44:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:31.655 06:44:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:31.655 06:44:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1148955 00:06:31.655 06:44:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:31.655 06:44:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:31.655 06:44:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1148955' 00:06:31.655 killing process with pid 1148955 00:06:31.655 06:44:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1148955 00:06:31.655 06:44:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1148955 00:06:32.220 06:44:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1148959 00:06:32.220 06:44:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1148959 ']' 00:06:32.220 06:44:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1148959 00:06:32.220 06:44:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:32.220 06:44:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:32.220 06:44:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1148959 00:06:32.220 06:44:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:32.220 06:44:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:32.220 06:44:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1148959' 00:06:32.220 
killing process with pid 1148959 00:06:32.220 06:44:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1148959 00:06:32.220 06:44:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1148959 00:06:32.479 00:06:32.479 real 0m3.436s 00:06:32.479 user 0m3.644s 00:06:32.479 sys 0m1.309s 00:06:32.479 06:44:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:32.479 06:44:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:32.479 ************************************ 00:06:32.479 END TEST non_locking_app_on_locked_coremask 00:06:32.479 ************************************ 00:06:32.479 06:44:39 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:32.479 06:44:39 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:32.479 06:44:39 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:32.479 06:44:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:32.737 ************************************ 00:06:32.737 START TEST locking_app_on_unlocked_coremask 00:06:32.737 ************************************ 00:06:32.737 06:44:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:32.737 06:44:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1149526 00:06:32.737 06:44:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1149526 /var/tmp/spdk.sock 00:06:32.737 06:44:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:32.737 06:44:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1149526 ']' 00:06:32.737 06:44:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.737 06:44:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:32.737 06:44:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:32.737 06:44:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:32.737 06:44:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:32.737 [2024-12-12 06:44:40.066677] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 00:06:32.737 [2024-12-12 06:44:40.066741] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1149526 ] 00:06:32.737 [2024-12-12 06:44:40.137965] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
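The non_locking_app_on_locked_coremask result above (two targets sharing core 0) comes from the second instance being launched with --disable-cpumask-locks and its own RPC socket, so it never contends for the lock the first instance holds. The essence, with both invocations taken from the trace (backgrounding with & and $! is a simplification of the waitforlisten-based startup):

  spdk_tgt=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt

  $spdk_tgt -m 0x1 &                                                  # first instance claims core 0
  first=$!
  $spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # second instance skips the lock
  second=$!

  # the first instance still holds the spdk_cpu_lock entry while both keep running
  lslocks -p "$first" | grep -q spdk_cpu_lock && echo "core 0 locked by pid $first"
  kill "$first" "$second"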
00:06:32.737 [2024-12-12 06:44:40.137990] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.737 [2024-12-12 06:44:40.178560] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.995 06:44:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:32.995 06:44:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:32.995 06:44:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1149533 00:06:32.995 06:44:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1149533 /var/tmp/spdk2.sock 00:06:32.995 06:44:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:32.995 06:44:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1149533 ']' 00:06:32.995 06:44:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:32.995 06:44:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:32.995 06:44:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:32.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:32.995 06:44:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:32.995 06:44:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:32.995 [2024-12-12 06:44:40.410394] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 
00:06:32.995 [2024-12-12 06:44:40.410479] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1149533 ] 00:06:32.995 [2024-12-12 06:44:40.514960] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.253 [2024-12-12 06:44:40.595777] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.818 06:44:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:33.818 06:44:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:33.818 06:44:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1149533 00:06:33.818 06:44:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1149533 00:06:33.818 06:44:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:34.383 lslocks: write error 00:06:34.383 06:44:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1149526 00:06:34.383 06:44:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1149526 ']' 00:06:34.383 06:44:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 1149526 00:06:34.383 06:44:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:34.383 06:44:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:34.383 06:44:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1149526 00:06:34.383 06:44:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:34.383 06:44:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:34.383 06:44:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1149526' 00:06:34.383 killing process with pid 1149526 00:06:34.383 06:44:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 1149526 00:06:34.383 06:44:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 1149526 00:06:35.315 06:44:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1149533 00:06:35.315 06:44:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1149533 ']' 00:06:35.315 06:44:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 1149533 00:06:35.315 06:44:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:35.315 06:44:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:35.315 06:44:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1149533 00:06:35.315 06:44:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:35.315 06:44:42 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:35.315 06:44:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1149533' 00:06:35.315 killing process with pid 1149533 00:06:35.315 06:44:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 1149533 00:06:35.315 06:44:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 1149533 00:06:35.573 00:06:35.573 real 0m2.801s 00:06:35.573 user 0m2.958s 00:06:35.573 sys 0m1.025s 00:06:35.573 06:44:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:35.573 06:44:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:35.573 ************************************ 00:06:35.573 END TEST locking_app_on_unlocked_coremask 00:06:35.573 ************************************ 00:06:35.573 06:44:42 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:35.573 06:44:42 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:35.573 06:44:42 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:35.573 06:44:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:35.573 ************************************ 00:06:35.573 START TEST locking_app_on_locked_coremask 00:06:35.573 ************************************ 00:06:35.573 06:44:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:35.573 06:44:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1150096 00:06:35.573 06:44:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1150096 /var/tmp/spdk.sock 00:06:35.573 06:44:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:35.573 06:44:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1150096 ']' 00:06:35.573 06:44:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.573 06:44:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:35.573 06:44:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:35.573 06:44:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:35.573 06:44:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:35.573 [2024-12-12 06:44:42.941070] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 
00:06:35.573 [2024-12-12 06:44:42.941124] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1150096 ] 00:06:35.573 [2024-12-12 06:44:43.011539] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.573 [2024-12-12 06:44:43.054030] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.831 06:44:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:35.831 06:44:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:35.831 06:44:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:35.831 06:44:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1150105 00:06:35.831 06:44:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1150105 /var/tmp/spdk2.sock 00:06:35.831 06:44:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:35.831 06:44:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1150105 /var/tmp/spdk2.sock 00:06:35.831 06:44:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:35.831 06:44:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:35.831 06:44:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:35.831 06:44:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:35.831 06:44:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 1150105 /var/tmp/spdk2.sock 00:06:35.831 06:44:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1150105 ']' 00:06:35.831 06:44:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:35.831 06:44:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:35.831 06:44:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:35.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:35.831 06:44:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:35.831 06:44:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:35.831 [2024-12-12 06:44:43.282677] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 
00:06:35.831 [2024-12-12 06:44:43.282759] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1150105 ] 00:06:36.088 [2024-12-12 06:44:43.381037] app.c: 782:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1150096 has claimed it. 00:06:36.088 [2024-12-12 06:44:43.381077] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:36.653 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1150105) - No such process 00:06:36.653 ERROR: process (pid: 1150105) is no longer running 00:06:36.653 06:44:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:36.653 06:44:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:36.653 06:44:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:36.653 06:44:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:36.653 06:44:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:36.653 06:44:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:36.653 06:44:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1150096 00:06:36.653 06:44:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1150096 00:06:36.653 06:44:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:36.911 lslocks: write error 00:06:36.911 06:44:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1150096 00:06:36.911 06:44:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1150096 ']' 00:06:36.911 06:44:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1150096 00:06:36.911 06:44:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:36.911 06:44:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:36.911 06:44:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1150096 00:06:36.911 06:44:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:36.911 06:44:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:36.911 06:44:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1150096' 00:06:36.911 killing process with pid 1150096 00:06:36.911 06:44:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1150096 00:06:36.911 06:44:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1150096 00:06:37.172 00:06:37.172 real 0m1.700s 00:06:37.172 user 0m1.814s 00:06:37.172 sys 0m0.596s 00:06:37.172 06:44:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:06:37.172 06:44:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:37.172 ************************************ 00:06:37.172 END TEST locking_app_on_locked_coremask 00:06:37.172 ************************************ 00:06:37.172 06:44:44 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:37.172 06:44:44 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:37.172 06:44:44 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:37.172 06:44:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:37.172 ************************************ 00:06:37.172 START TEST locking_overlapped_coremask 00:06:37.172 ************************************ 00:06:37.172 06:44:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:37.172 06:44:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:37.172 06:44:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1150403 00:06:37.172 06:44:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1150403 /var/tmp/spdk.sock 00:06:37.172 06:44:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 1150403 ']' 00:06:37.172 06:44:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.172 06:44:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:37.172 06:44:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:37.172 06:44:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:37.172 06:44:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:37.575 [2024-12-12 06:44:44.697569] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 
00:06:37.575 [2024-12-12 06:44:44.697609] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1150403 ] 00:06:37.575 [2024-12-12 06:44:44.766635] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:37.575 [2024-12-12 06:44:44.812538] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:37.575 [2024-12-12 06:44:44.812622] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.575 [2024-12-12 06:44:44.812631] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:37.575 06:44:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:37.575 06:44:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:37.575 06:44:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1150409 00:06:37.575 06:44:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1150409 /var/tmp/spdk2.sock 00:06:37.575 06:44:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:37.575 06:44:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:37.575 06:44:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1150409 /var/tmp/spdk2.sock 00:06:37.575 06:44:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:37.576 06:44:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:37.576 06:44:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:37.576 06:44:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:37.576 06:44:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 1150409 /var/tmp/spdk2.sock 00:06:37.576 06:44:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 1150409 ']' 00:06:37.576 06:44:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:37.576 06:44:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:37.576 06:44:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:37.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:37.576 06:44:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:37.576 06:44:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:37.576 [2024-12-12 06:44:45.042472] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 
00:06:37.576 [2024-12-12 06:44:45.042545] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1150409 ] 00:06:37.833 [2024-12-12 06:44:45.144221] app.c: 782:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1150403 has claimed it. 00:06:37.833 [2024-12-12 06:44:45.144259] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:38.398 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1150409) - No such process 00:06:38.398 ERROR: process (pid: 1150409) is no longer running 00:06:38.398 06:44:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:38.398 06:44:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:38.398 06:44:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:38.398 06:44:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:38.398 06:44:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:38.398 06:44:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:38.398 06:44:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:38.398 06:44:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:38.398 06:44:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:38.398 06:44:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:38.398 06:44:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1150403 00:06:38.398 06:44:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 1150403 ']' 00:06:38.398 06:44:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 1150403 00:06:38.398 06:44:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:38.398 06:44:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:38.399 06:44:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1150403 00:06:38.399 06:44:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:38.399 06:44:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:38.399 06:44:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1150403' 00:06:38.399 killing process with pid 1150403 00:06:38.399 06:44:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 1150403 00:06:38.399 06:44:45 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 1150403 00:06:38.659 00:06:38.659 real 0m1.385s 00:06:38.659 user 0m3.888s 00:06:38.659 sys 0m0.396s 00:06:38.659 06:44:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:38.659 06:44:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:38.659 ************************************ 00:06:38.659 END TEST locking_overlapped_coremask 00:06:38.659 ************************************ 00:06:38.659 06:44:46 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:38.659 06:44:46 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:38.659 06:44:46 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:38.659 06:44:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:38.659 ************************************ 00:06:38.659 START TEST locking_overlapped_coremask_via_rpc 00:06:38.659 ************************************ 00:06:38.659 06:44:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:38.659 06:44:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:38.659 06:44:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1150702 00:06:38.659 06:44:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1150702 /var/tmp/spdk.sock 00:06:38.659 06:44:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1150702 ']' 00:06:38.659 06:44:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.659 06:44:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:38.659 06:44:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.659 06:44:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:38.659 06:44:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:38.659 [2024-12-12 06:44:46.146054] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 00:06:38.659 [2024-12-12 06:44:46.146094] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1150702 ] 00:06:38.918 [2024-12-12 06:44:46.214315] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:38.918 [2024-12-12 06:44:46.214339] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:38.918 [2024-12-12 06:44:46.260589] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:38.918 [2024-12-12 06:44:46.260683] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:38.918 [2024-12-12 06:44:46.260686] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.176 06:44:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:39.176 06:44:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:39.176 06:44:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1150710 00:06:39.176 06:44:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1150710 /var/tmp/spdk2.sock 00:06:39.176 06:44:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:39.176 06:44:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1150710 ']' 00:06:39.176 06:44:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:39.176 06:44:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:39.176 06:44:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:39.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:39.176 06:44:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:39.176 06:44:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:39.176 [2024-12-12 06:44:46.494488] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 00:06:39.176 [2024-12-12 06:44:46.494556] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1150710 ] 00:06:39.176 [2024-12-12 06:44:46.595840] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:39.176 [2024-12-12 06:44:46.595870] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:39.176 [2024-12-12 06:44:46.685796] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:06:39.176 [2024-12-12 06:44:46.689181] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:06:39.176 [2024-12-12 06:44:46.689182] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:40.110 06:44:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:40.110 06:44:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:40.110 06:44:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:40.110 06:44:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.110 06:44:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:40.110 06:44:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:40.110 06:44:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:40.110 06:44:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:40.110 06:44:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:40.110 06:44:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:40.110 06:44:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:40.110 06:44:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:40.110 06:44:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:40.110 06:44:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:40.110 06:44:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.110 06:44:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:40.110 [2024-12-12 06:44:47.383239] app.c: 782:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1150702 has claimed it. 
00:06:40.110 request: 00:06:40.110 { 00:06:40.110 "method": "framework_enable_cpumask_locks", 00:06:40.110 "req_id": 1 00:06:40.110 } 00:06:40.110 Got JSON-RPC error response 00:06:40.110 response: 00:06:40.110 { 00:06:40.110 "code": -32603, 00:06:40.110 "message": "Failed to claim CPU core: 2" 00:06:40.110 } 00:06:40.110 06:44:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:40.110 06:44:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:40.110 06:44:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:40.110 06:44:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:40.110 06:44:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:40.110 06:44:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1150702 /var/tmp/spdk.sock 00:06:40.110 06:44:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1150702 ']' 00:06:40.110 06:44:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.110 06:44:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:40.111 06:44:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:40.111 06:44:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:40.111 06:44:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:40.111 06:44:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:40.111 06:44:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:40.111 06:44:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1150710 /var/tmp/spdk2.sock 00:06:40.111 06:44:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1150710 ']' 00:06:40.111 06:44:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:40.111 06:44:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:40.111 06:44:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:40.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
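The exchange above shows the second target (started on mask 0x1c against /var/tmp/spdk2.sock with --disable-cpumask-locks) being refused once framework_enable_cpumask_locks is invoked while pid 1150702 still holds the lock on core 2. A minimal hand-driven sketch of the same collision, using only the binary, flags, socket path and RPC name that appear in this trace; the backgrounding, sleeps and relative paths are illustrative additions, not taken from the run:

  # primary target claims cores 0-2 (mask 0x7) and creates /var/tmp/spdk_cpu_lock_* files
  ./build/bin/spdk_tgt -m 0x7 &
  sleep 2
  # second target overlaps on core 2 but starts without taking the core locks
  ./build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &
  sleep 2
  # asking it to claim its cores now fails, mirroring the -32603 "Failed to claim CPU core: 2" response above
  ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
  # the primary's per-core lock files remain visible on disk
  ls /var/tmp/spdk_cpu_lock_*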
00:06:40.111 06:44:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:40.111 06:44:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:40.368 06:44:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:40.368 06:44:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:40.368 06:44:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:40.369 06:44:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:40.369 06:44:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:40.369 06:44:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:40.369 00:06:40.369 real 0m1.658s 00:06:40.369 user 0m0.800s 00:06:40.369 sys 0m0.154s 00:06:40.369 06:44:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:40.369 06:44:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:40.369 ************************************ 00:06:40.369 END TEST locking_overlapped_coremask_via_rpc 00:06:40.369 ************************************ 00:06:40.369 06:44:47 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:40.369 06:44:47 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1150702 ]] 00:06:40.369 06:44:47 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1150702 00:06:40.369 06:44:47 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1150702 ']' 00:06:40.369 06:44:47 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1150702 00:06:40.369 06:44:47 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:40.369 06:44:47 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:40.369 06:44:47 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1150702 00:06:40.627 06:44:47 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:40.627 06:44:47 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:40.627 06:44:47 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1150702' 00:06:40.627 killing process with pid 1150702 00:06:40.627 06:44:47 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 1150702 00:06:40.627 06:44:47 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 1150702 00:06:40.885 06:44:48 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1150710 ]] 00:06:40.885 06:44:48 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1150710 00:06:40.885 06:44:48 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1150710 ']' 00:06:40.885 06:44:48 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1150710 00:06:40.885 06:44:48 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:40.885 06:44:48 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:06:40.885 06:44:48 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1150710 00:06:40.885 06:44:48 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:40.885 06:44:48 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:40.885 06:44:48 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1150710' 00:06:40.885 killing process with pid 1150710 00:06:40.885 06:44:48 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 1150710 00:06:40.885 06:44:48 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 1150710 00:06:41.143 06:44:48 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:41.143 06:44:48 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:41.143 06:44:48 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1150702 ]] 00:06:41.143 06:44:48 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1150702 00:06:41.143 06:44:48 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1150702 ']' 00:06:41.143 06:44:48 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1150702 00:06:41.143 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1150702) - No such process 00:06:41.143 06:44:48 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 1150702 is not found' 00:06:41.143 Process with pid 1150702 is not found 00:06:41.143 06:44:48 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1150710 ]] 00:06:41.143 06:44:48 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1150710 00:06:41.143 06:44:48 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1150710 ']' 00:06:41.143 06:44:48 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1150710 00:06:41.143 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1150710) - No such process 00:06:41.143 06:44:48 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 1150710 is not found' 00:06:41.143 Process with pid 1150710 is not found 00:06:41.143 06:44:48 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:41.143 00:06:41.143 real 0m14.362s 00:06:41.143 user 0m24.807s 00:06:41.143 sys 0m5.437s 00:06:41.143 06:44:48 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:41.143 06:44:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:41.143 ************************************ 00:06:41.143 END TEST cpu_locks 00:06:41.143 ************************************ 00:06:41.143 00:06:41.143 real 0m38.956s 00:06:41.143 user 1m13.105s 00:06:41.143 sys 0m9.730s 00:06:41.143 06:44:48 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:41.143 06:44:48 event -- common/autotest_common.sh@10 -- # set +x 00:06:41.143 ************************************ 00:06:41.143 END TEST event 00:06:41.143 ************************************ 00:06:41.402 06:44:48 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:06:41.402 06:44:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:41.402 06:44:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:41.402 06:44:48 -- common/autotest_common.sh@10 -- # set +x 00:06:41.402 ************************************ 00:06:41.402 START TEST thread 00:06:41.402 ************************************ 00:06:41.402 06:44:48 thread -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:06:41.402 * Looking for test storage... 00:06:41.402 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread 00:06:41.402 06:44:48 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:41.402 06:44:48 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:06:41.402 06:44:48 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:41.402 06:44:48 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:41.402 06:44:48 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:41.402 06:44:48 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:41.402 06:44:48 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:41.402 06:44:48 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:41.402 06:44:48 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:41.402 06:44:48 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:41.402 06:44:48 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:41.402 06:44:48 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:41.402 06:44:48 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:41.402 06:44:48 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:41.402 06:44:48 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:41.402 06:44:48 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:41.402 06:44:48 thread -- scripts/common.sh@345 -- # : 1 00:06:41.402 06:44:48 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:41.402 06:44:48 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:41.402 06:44:48 thread -- scripts/common.sh@365 -- # decimal 1 00:06:41.402 06:44:48 thread -- scripts/common.sh@353 -- # local d=1 00:06:41.402 06:44:48 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:41.402 06:44:48 thread -- scripts/common.sh@355 -- # echo 1 00:06:41.402 06:44:48 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:41.402 06:44:48 thread -- scripts/common.sh@366 -- # decimal 2 00:06:41.402 06:44:48 thread -- scripts/common.sh@353 -- # local d=2 00:06:41.402 06:44:48 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:41.402 06:44:48 thread -- scripts/common.sh@355 -- # echo 2 00:06:41.402 06:44:48 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:41.402 06:44:48 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:41.402 06:44:48 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:41.402 06:44:48 thread -- scripts/common.sh@368 -- # return 0 00:06:41.402 06:44:48 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:41.402 06:44:48 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:41.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.402 --rc genhtml_branch_coverage=1 00:06:41.402 --rc genhtml_function_coverage=1 00:06:41.402 --rc genhtml_legend=1 00:06:41.402 --rc geninfo_all_blocks=1 00:06:41.402 --rc geninfo_unexecuted_blocks=1 00:06:41.402 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:41.402 ' 00:06:41.402 06:44:48 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:41.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.402 --rc genhtml_branch_coverage=1 00:06:41.402 --rc genhtml_function_coverage=1 00:06:41.402 --rc genhtml_legend=1 
00:06:41.402 --rc geninfo_all_blocks=1 00:06:41.402 --rc geninfo_unexecuted_blocks=1 00:06:41.402 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:41.402 ' 00:06:41.402 06:44:48 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:41.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.402 --rc genhtml_branch_coverage=1 00:06:41.402 --rc genhtml_function_coverage=1 00:06:41.402 --rc genhtml_legend=1 00:06:41.402 --rc geninfo_all_blocks=1 00:06:41.402 --rc geninfo_unexecuted_blocks=1 00:06:41.402 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:41.402 ' 00:06:41.402 06:44:48 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:41.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.402 --rc genhtml_branch_coverage=1 00:06:41.402 --rc genhtml_function_coverage=1 00:06:41.402 --rc genhtml_legend=1 00:06:41.402 --rc geninfo_all_blocks=1 00:06:41.402 --rc geninfo_unexecuted_blocks=1 00:06:41.402 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:41.402 ' 00:06:41.402 06:44:48 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:41.402 06:44:48 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:41.402 06:44:48 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:41.402 06:44:48 thread -- common/autotest_common.sh@10 -- # set +x 00:06:41.402 ************************************ 00:06:41.402 START TEST thread_poller_perf 00:06:41.403 ************************************ 00:06:41.403 06:44:48 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:41.661 [2024-12-12 06:44:48.940664] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 00:06:41.661 [2024-12-12 06:44:48.940750] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1151345 ] 00:06:41.661 [2024-12-12 06:44:49.014236] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.661 [2024-12-12 06:44:49.053249] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.661 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:42.594 [2024-12-12T05:44:50.116Z] ====================================== 00:06:42.594 [2024-12-12T05:44:50.116Z] busy:2504109862 (cyc) 00:06:42.594 [2024-12-12T05:44:50.116Z] total_run_count: 860000 00:06:42.594 [2024-12-12T05:44:50.116Z] tsc_hz: 2500000000 (cyc) 00:06:42.594 [2024-12-12T05:44:50.116Z] ====================================== 00:06:42.594 [2024-12-12T05:44:50.116Z] poller_cost: 2911 (cyc), 1164 (nsec) 00:06:42.594 00:06:42.594 real 0m1.172s 00:06:42.594 user 0m1.095s 00:06:42.594 sys 0m0.074s 00:06:42.594 06:44:50 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:42.594 06:44:50 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:42.594 ************************************ 00:06:42.594 END TEST thread_poller_perf 00:06:42.594 ************************************ 00:06:42.853 06:44:50 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:42.853 06:44:50 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:42.853 06:44:50 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:42.853 06:44:50 thread -- common/autotest_common.sh@10 -- # set +x 00:06:42.853 ************************************ 00:06:42.853 START TEST thread_poller_perf 00:06:42.853 ************************************ 00:06:42.853 06:44:50 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:42.853 [2024-12-12 06:44:50.193341] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 00:06:42.853 [2024-12-12 06:44:50.193437] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1151573 ] 00:06:42.853 [2024-12-12 06:44:50.268041] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.853 [2024-12-12 06:44:50.308656] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.853 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:06:44.227 [2024-12-12T05:44:51.749Z] ====================================== 00:06:44.227 [2024-12-12T05:44:51.749Z] busy:2501372822 (cyc) 00:06:44.227 [2024-12-12T05:44:51.749Z] total_run_count: 11948000 00:06:44.227 [2024-12-12T05:44:51.749Z] tsc_hz: 2500000000 (cyc) 00:06:44.227 [2024-12-12T05:44:51.749Z] ====================================== 00:06:44.227 [2024-12-12T05:44:51.749Z] poller_cost: 209 (cyc), 83 (nsec) 00:06:44.227 00:06:44.227 real 0m1.168s 00:06:44.227 user 0m1.084s 00:06:44.227 sys 0m0.081s 00:06:44.227 06:44:51 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:44.227 06:44:51 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:44.227 ************************************ 00:06:44.227 END TEST thread_poller_perf 00:06:44.227 ************************************ 00:06:44.227 06:44:51 thread -- thread/thread.sh@17 -- # [[ n != \y ]] 00:06:44.227 06:44:51 thread -- thread/thread.sh@18 -- # run_test thread_spdk_lock /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:06:44.227 06:44:51 thread -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:44.227 06:44:51 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:44.227 06:44:51 thread -- common/autotest_common.sh@10 -- # set +x 00:06:44.227 ************************************ 00:06:44.227 START TEST thread_spdk_lock 00:06:44.227 ************************************ 00:06:44.227 06:44:51 thread.thread_spdk_lock -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:06:44.227 [2024-12-12 06:44:51.439029] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 00:06:44.227 [2024-12-12 06:44:51.439133] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1151712 ] 00:06:44.227 [2024-12-12 06:44:51.512880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:44.227 [2024-12-12 06:44:51.553923] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:44.227 [2024-12-12 06:44:51.553926] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.793 [2024-12-12 06:44:52.044633] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 990:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:06:44.793 [2024-12-12 06:44:52.044667] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3214:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:06:44.793 [2024-12-12 06:44:52.044677] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3169:sspin_stacks_print: *ERROR*: spinlock 0x14e4380 00:06:44.793 [2024-12-12 06:44:52.045402] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 885:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:06:44.793 [2024-12-12 06:44:52.045506] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:1051:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:06:44.793 [2024-12-12 
06:44:52.045525] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 885:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:06:44.793 Starting test contend 00:06:44.793 Worker Delay Wait us Hold us Total us 00:06:44.793 0 3 173381 186740 360121 00:06:44.793 1 5 90042 286539 376581 00:06:44.793 PASS test contend 00:06:44.793 Starting test hold_by_poller 00:06:44.793 PASS test hold_by_poller 00:06:44.793 Starting test hold_by_message 00:06:44.793 PASS test hold_by_message 00:06:44.793 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock summary: 00:06:44.793 100014 assertions passed 00:06:44.793 0 assertions failed 00:06:44.793 00:06:44.793 real 0m0.657s 00:06:44.793 user 0m1.065s 00:06:44.793 sys 0m0.080s 00:06:44.793 06:44:52 thread.thread_spdk_lock -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:44.793 06:44:52 thread.thread_spdk_lock -- common/autotest_common.sh@10 -- # set +x 00:06:44.793 ************************************ 00:06:44.793 END TEST thread_spdk_lock 00:06:44.793 ************************************ 00:06:44.793 00:06:44.793 real 0m3.421s 00:06:44.793 user 0m3.424s 00:06:44.793 sys 0m0.509s 00:06:44.793 06:44:52 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:44.793 06:44:52 thread -- common/autotest_common.sh@10 -- # set +x 00:06:44.793 ************************************ 00:06:44.793 END TEST thread 00:06:44.793 ************************************ 00:06:44.793 06:44:52 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:44.793 06:44:52 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh 00:06:44.793 06:44:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:44.793 06:44:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:44.793 06:44:52 -- common/autotest_common.sh@10 -- # set +x 00:06:44.793 ************************************ 00:06:44.793 START TEST app_cmdline 00:06:44.793 ************************************ 00:06:44.793 06:44:52 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh 00:06:44.793 * Looking for test storage... 
00:06:44.793 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:06:44.793 06:44:52 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:44.793 06:44:52 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:06:44.793 06:44:52 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:45.052 06:44:52 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:45.052 06:44:52 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:45.052 06:44:52 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:45.052 06:44:52 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:45.052 06:44:52 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:45.052 06:44:52 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:45.052 06:44:52 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:45.052 06:44:52 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:45.052 06:44:52 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:45.052 06:44:52 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:45.052 06:44:52 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:45.052 06:44:52 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:45.052 06:44:52 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:45.052 06:44:52 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:45.052 06:44:52 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:45.052 06:44:52 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:45.052 06:44:52 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:45.052 06:44:52 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:45.052 06:44:52 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:45.052 06:44:52 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:45.052 06:44:52 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:45.052 06:44:52 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:45.052 06:44:52 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:45.052 06:44:52 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:45.052 06:44:52 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:45.052 06:44:52 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:45.052 06:44:52 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:45.052 06:44:52 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:45.052 06:44:52 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:45.052 06:44:52 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:45.052 06:44:52 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:45.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.052 --rc genhtml_branch_coverage=1 00:06:45.052 --rc genhtml_function_coverage=1 00:06:45.052 --rc genhtml_legend=1 00:06:45.052 --rc geninfo_all_blocks=1 00:06:45.052 --rc geninfo_unexecuted_blocks=1 00:06:45.052 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:45.052 ' 00:06:45.052 06:44:52 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:45.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.052 --rc genhtml_branch_coverage=1 00:06:45.052 --rc genhtml_function_coverage=1 00:06:45.052 --rc 
genhtml_legend=1 00:06:45.052 --rc geninfo_all_blocks=1 00:06:45.052 --rc geninfo_unexecuted_blocks=1 00:06:45.052 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:45.052 ' 00:06:45.052 06:44:52 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:45.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.052 --rc genhtml_branch_coverage=1 00:06:45.052 --rc genhtml_function_coverage=1 00:06:45.052 --rc genhtml_legend=1 00:06:45.052 --rc geninfo_all_blocks=1 00:06:45.052 --rc geninfo_unexecuted_blocks=1 00:06:45.052 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:45.052 ' 00:06:45.052 06:44:52 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:45.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.052 --rc genhtml_branch_coverage=1 00:06:45.052 --rc genhtml_function_coverage=1 00:06:45.052 --rc genhtml_legend=1 00:06:45.052 --rc geninfo_all_blocks=1 00:06:45.052 --rc geninfo_unexecuted_blocks=1 00:06:45.052 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:45.052 ' 00:06:45.052 06:44:52 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:45.052 06:44:52 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1151993 00:06:45.052 06:44:52 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:45.052 06:44:52 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1151993 00:06:45.052 06:44:52 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 1151993 ']' 00:06:45.052 06:44:52 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.052 06:44:52 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:45.052 06:44:52 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.052 06:44:52 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:45.052 06:44:52 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:45.052 [2024-12-12 06:44:52.361083] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 
00:06:45.052 [2024-12-12 06:44:52.361127] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1151993 ] 00:06:45.052 [2024-12-12 06:44:52.429804] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.052 [2024-12-12 06:44:52.473155] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.311 06:44:52 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:45.311 06:44:52 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:45.311 06:44:52 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:45.570 { 00:06:45.570 "version": "SPDK v25.01-pre git sha1 d58eef2a2", 00:06:45.570 "fields": { 00:06:45.570 "major": 25, 00:06:45.570 "minor": 1, 00:06:45.570 "patch": 0, 00:06:45.570 "suffix": "-pre", 00:06:45.570 "commit": "d58eef2a2" 00:06:45.570 } 00:06:45.570 } 00:06:45.570 06:44:52 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:45.570 06:44:52 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:45.570 06:44:52 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:45.570 06:44:52 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:45.570 06:44:52 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:45.570 06:44:52 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:45.570 06:44:52 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.570 06:44:52 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:45.570 06:44:52 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:45.570 06:44:52 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.570 06:44:52 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:45.570 06:44:52 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:45.570 06:44:52 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:45.570 06:44:52 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:45.570 06:44:52 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:45.570 06:44:52 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:06:45.570 06:44:52 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:45.570 06:44:52 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:06:45.570 06:44:52 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:45.570 06:44:52 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:06:45.570 06:44:52 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:45.570 06:44:52 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:06:45.570 06:44:52 app_cmdline -- 
common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py ]] 00:06:45.570 06:44:52 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:45.570 request: 00:06:45.570 { 00:06:45.570 "method": "env_dpdk_get_mem_stats", 00:06:45.570 "req_id": 1 00:06:45.570 } 00:06:45.570 Got JSON-RPC error response 00:06:45.570 response: 00:06:45.570 { 00:06:45.570 "code": -32601, 00:06:45.570 "message": "Method not found" 00:06:45.570 } 00:06:45.570 06:44:53 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:45.570 06:44:53 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:45.570 06:44:53 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:45.570 06:44:53 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:45.570 06:44:53 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1151993 00:06:45.570 06:44:53 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 1151993 ']' 00:06:45.570 06:44:53 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 1151993 00:06:45.570 06:44:53 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:45.570 06:44:53 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:45.570 06:44:53 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1151993 00:06:45.829 06:44:53 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:45.829 06:44:53 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:45.829 06:44:53 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1151993' 00:06:45.829 killing process with pid 1151993 00:06:45.829 06:44:53 app_cmdline -- common/autotest_common.sh@973 -- # kill 1151993 00:06:45.829 06:44:53 app_cmdline -- common/autotest_common.sh@978 -- # wait 1151993 00:06:46.088 00:06:46.088 real 0m1.246s 00:06:46.088 user 0m1.430s 00:06:46.088 sys 0m0.454s 00:06:46.088 06:44:53 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:46.088 06:44:53 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:46.088 ************************************ 00:06:46.088 END TEST app_cmdline 00:06:46.088 ************************************ 00:06:46.088 06:44:53 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh 00:06:46.088 06:44:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:46.088 06:44:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:46.088 06:44:53 -- common/autotest_common.sh@10 -- # set +x 00:06:46.088 ************************************ 00:06:46.088 START TEST version 00:06:46.088 ************************************ 00:06:46.088 06:44:53 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh 00:06:46.347 * Looking for test storage... 
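The app_cmdline run above starts spdk_tgt with --rpcs-allowed spdk_get_version,rpc_get_methods, verifies that exactly those two methods are reported, and then confirms that env_dpdk_get_mem_stats is rejected with code -32601 "Method not found". A hand-driven sketch of that allow-list check, assuming the same tree layout as in the trace (the backgrounding and sleep are illustrative):

  ./build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
  sleep 2
  ./scripts/rpc.py spdk_get_version        # allowed: returns the version JSON seen above
  ./scripts/rpc.py rpc_get_methods         # allowed: lists only the two permitted methods
  ./scripts/rpc.py env_dpdk_get_mem_stats  # outside the allow list: JSON-RPC error -32601, "Method not found"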
00:06:46.347 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:06:46.347 06:44:53 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:46.347 06:44:53 version -- common/autotest_common.sh@1711 -- # lcov --version 00:06:46.347 06:44:53 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:46.347 06:44:53 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:46.347 06:44:53 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:46.347 06:44:53 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:46.347 06:44:53 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:46.347 06:44:53 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:46.347 06:44:53 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:46.347 06:44:53 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:46.347 06:44:53 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:46.347 06:44:53 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:46.347 06:44:53 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:46.347 06:44:53 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:46.347 06:44:53 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:46.347 06:44:53 version -- scripts/common.sh@344 -- # case "$op" in 00:06:46.347 06:44:53 version -- scripts/common.sh@345 -- # : 1 00:06:46.347 06:44:53 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:46.347 06:44:53 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:46.347 06:44:53 version -- scripts/common.sh@365 -- # decimal 1 00:06:46.347 06:44:53 version -- scripts/common.sh@353 -- # local d=1 00:06:46.348 06:44:53 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:46.348 06:44:53 version -- scripts/common.sh@355 -- # echo 1 00:06:46.348 06:44:53 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:46.348 06:44:53 version -- scripts/common.sh@366 -- # decimal 2 00:06:46.348 06:44:53 version -- scripts/common.sh@353 -- # local d=2 00:06:46.348 06:44:53 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:46.348 06:44:53 version -- scripts/common.sh@355 -- # echo 2 00:06:46.348 06:44:53 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:46.348 06:44:53 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:46.348 06:44:53 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:46.348 06:44:53 version -- scripts/common.sh@368 -- # return 0 00:06:46.348 06:44:53 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:46.348 06:44:53 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:46.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.348 --rc genhtml_branch_coverage=1 00:06:46.348 --rc genhtml_function_coverage=1 00:06:46.348 --rc genhtml_legend=1 00:06:46.348 --rc geninfo_all_blocks=1 00:06:46.348 --rc geninfo_unexecuted_blocks=1 00:06:46.348 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:46.348 ' 00:06:46.348 06:44:53 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:46.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.348 --rc genhtml_branch_coverage=1 00:06:46.348 --rc genhtml_function_coverage=1 00:06:46.348 --rc genhtml_legend=1 00:06:46.348 --rc geninfo_all_blocks=1 00:06:46.348 --rc geninfo_unexecuted_blocks=1 00:06:46.348 --gcov-tool 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:46.348 ' 00:06:46.348 06:44:53 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:46.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.348 --rc genhtml_branch_coverage=1 00:06:46.348 --rc genhtml_function_coverage=1 00:06:46.348 --rc genhtml_legend=1 00:06:46.348 --rc geninfo_all_blocks=1 00:06:46.348 --rc geninfo_unexecuted_blocks=1 00:06:46.348 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:46.348 ' 00:06:46.348 06:44:53 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:46.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.348 --rc genhtml_branch_coverage=1 00:06:46.348 --rc genhtml_function_coverage=1 00:06:46.348 --rc genhtml_legend=1 00:06:46.348 --rc geninfo_all_blocks=1 00:06:46.348 --rc geninfo_unexecuted_blocks=1 00:06:46.348 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:46.348 ' 00:06:46.348 06:44:53 version -- app/version.sh@17 -- # get_header_version major 00:06:46.348 06:44:53 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:06:46.348 06:44:53 version -- app/version.sh@14 -- # cut -f2 00:06:46.348 06:44:53 version -- app/version.sh@14 -- # tr -d '"' 00:06:46.348 06:44:53 version -- app/version.sh@17 -- # major=25 00:06:46.348 06:44:53 version -- app/version.sh@18 -- # get_header_version minor 00:06:46.348 06:44:53 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:06:46.348 06:44:53 version -- app/version.sh@14 -- # cut -f2 00:06:46.348 06:44:53 version -- app/version.sh@14 -- # tr -d '"' 00:06:46.348 06:44:53 version -- app/version.sh@18 -- # minor=1 00:06:46.348 06:44:53 version -- app/version.sh@19 -- # get_header_version patch 00:06:46.348 06:44:53 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:06:46.348 06:44:53 version -- app/version.sh@14 -- # cut -f2 00:06:46.348 06:44:53 version -- app/version.sh@14 -- # tr -d '"' 00:06:46.348 06:44:53 version -- app/version.sh@19 -- # patch=0 00:06:46.348 06:44:53 version -- app/version.sh@20 -- # get_header_version suffix 00:06:46.348 06:44:53 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:06:46.348 06:44:53 version -- app/version.sh@14 -- # cut -f2 00:06:46.348 06:44:53 version -- app/version.sh@14 -- # tr -d '"' 00:06:46.348 06:44:53 version -- app/version.sh@20 -- # suffix=-pre 00:06:46.348 06:44:53 version -- app/version.sh@22 -- # version=25.1 00:06:46.348 06:44:53 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:46.348 06:44:53 version -- app/version.sh@28 -- # version=25.1rc0 00:06:46.348 06:44:53 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:06:46.348 06:44:53 version -- app/version.sh@30 -- # 
python3 -c 'import spdk; print(spdk.__version__)' 00:06:46.348 06:44:53 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:46.348 06:44:53 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:46.348 00:06:46.348 real 0m0.269s 00:06:46.348 user 0m0.153s 00:06:46.348 sys 0m0.171s 00:06:46.348 06:44:53 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:46.348 06:44:53 version -- common/autotest_common.sh@10 -- # set +x 00:06:46.348 ************************************ 00:06:46.348 END TEST version 00:06:46.348 ************************************ 00:06:46.348 06:44:53 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:46.348 06:44:53 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:46.348 06:44:53 -- spdk/autotest.sh@194 -- # uname -s 00:06:46.348 06:44:53 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:46.348 06:44:53 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:46.348 06:44:53 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:46.348 06:44:53 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:06:46.348 06:44:53 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:46.348 06:44:53 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:46.348 06:44:53 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:46.348 06:44:53 -- common/autotest_common.sh@10 -- # set +x 00:06:46.607 06:44:53 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:46.607 06:44:53 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:06:46.607 06:44:53 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:06:46.607 06:44:53 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:06:46.607 06:44:53 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:06:46.607 06:44:53 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:06:46.607 06:44:53 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:06:46.607 06:44:53 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:06:46.607 06:44:53 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:06:46.607 06:44:53 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:06:46.607 06:44:53 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:06:46.607 06:44:53 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:06:46.607 06:44:53 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:06:46.607 06:44:53 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:06:46.607 06:44:53 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:06:46.607 06:44:53 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:06:46.607 06:44:53 -- spdk/autotest.sh@374 -- # [[ 1 -eq 1 ]] 00:06:46.607 06:44:53 -- spdk/autotest.sh@375 -- # run_test llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:06:46.607 06:44:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:46.607 06:44:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:46.607 06:44:53 -- common/autotest_common.sh@10 -- # set +x 00:06:46.607 ************************************ 00:06:46.607 START TEST llvm_fuzz 00:06:46.607 ************************************ 00:06:46.607 06:44:53 llvm_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:06:46.607 * Looking for test storage... 
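The version suite traced above reads the version fields out of include/spdk/version.h with grep, cut and tr, assembles them into 25.1rc0, and cross-checks the result against the installed python package via python3 -c 'import spdk; print(spdk.__version__)'. A condensed sketch of the same extraction, reusing the commands exactly as they appear in the trace; the standalone field wrapper is illustrative, and how the -pre suffix becomes rc0 is not visible here, so it is left out:

    #!/usr/bin/env bash
    # Pull SPDK_VERSION_{MAJOR,MINOR,PATCH,SUFFIX} out of version.h the same
    # way app/version.sh does in the trace above (grep | cut -f2 | tr -d '"').
    hdr=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h

    field() {
        grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" "$hdr" | cut -f2 | tr -d '"'
    }

    major=$(field MAJOR)    # 25 in this run
    minor=$(field MINOR)    # 1
    patch=$(field PATCH)    # 0
    suffix=$(field SUFFIX)  # -pre

    version=$major.$minor
    (( patch != 0 )) && version=$version.$patch
    echo "header version: $version$suffix"
    # The suite's final check compares its assembled string (25.1rc0 here)
    # against: python3 -c 'import spdk; print(spdk.__version__)'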
00:06:46.607 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz 00:06:46.607 06:44:54 llvm_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:46.607 06:44:54 llvm_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:06:46.607 06:44:54 llvm_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:46.607 06:44:54 llvm_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:46.607 06:44:54 llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:46.607 06:44:54 llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:46.607 06:44:54 llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:46.607 06:44:54 llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:06:46.607 06:44:54 llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:06:46.607 06:44:54 llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:06:46.607 06:44:54 llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:06:46.607 06:44:54 llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:06:46.607 06:44:54 llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:06:46.607 06:44:54 llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:06:46.607 06:44:54 llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:46.607 06:44:54 llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:06:46.607 06:44:54 llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:06:46.607 06:44:54 llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:46.607 06:44:54 llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:46.607 06:44:54 llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:06:46.607 06:44:54 llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:06:46.607 06:44:54 llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:46.607 06:44:54 llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:06:46.607 06:44:54 llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:06:46.607 06:44:54 llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:06:46.607 06:44:54 llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:06:46.607 06:44:54 llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:46.607 06:44:54 llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:06:46.607 06:44:54 llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:06:46.607 06:44:54 llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:46.607 06:44:54 llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:46.607 06:44:54 llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:06:46.607 06:44:54 llvm_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:46.608 06:44:54 llvm_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:46.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.608 --rc genhtml_branch_coverage=1 00:06:46.608 --rc genhtml_function_coverage=1 00:06:46.608 --rc genhtml_legend=1 00:06:46.608 --rc geninfo_all_blocks=1 00:06:46.608 --rc geninfo_unexecuted_blocks=1 00:06:46.608 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:46.608 ' 00:06:46.608 06:44:54 llvm_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:46.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.608 --rc genhtml_branch_coverage=1 00:06:46.608 --rc genhtml_function_coverage=1 00:06:46.608 --rc genhtml_legend=1 00:06:46.608 --rc geninfo_all_blocks=1 00:06:46.608 --rc 
geninfo_unexecuted_blocks=1 00:06:46.608 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:46.608 ' 00:06:46.608 06:44:54 llvm_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:46.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.608 --rc genhtml_branch_coverage=1 00:06:46.608 --rc genhtml_function_coverage=1 00:06:46.608 --rc genhtml_legend=1 00:06:46.608 --rc geninfo_all_blocks=1 00:06:46.608 --rc geninfo_unexecuted_blocks=1 00:06:46.608 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:46.608 ' 00:06:46.608 06:44:54 llvm_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:46.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.608 --rc genhtml_branch_coverage=1 00:06:46.608 --rc genhtml_function_coverage=1 00:06:46.608 --rc genhtml_legend=1 00:06:46.608 --rc geninfo_all_blocks=1 00:06:46.608 --rc geninfo_unexecuted_blocks=1 00:06:46.608 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:46.608 ' 00:06:46.608 06:44:54 llvm_fuzz -- fuzz/llvm.sh@11 -- # fuzzers=($(get_fuzzer_targets)) 00:06:46.608 06:44:54 llvm_fuzz -- fuzz/llvm.sh@11 -- # get_fuzzer_targets 00:06:46.608 06:44:54 llvm_fuzz -- common/autotest_common.sh@550 -- # fuzzers=() 00:06:46.608 06:44:54 llvm_fuzz -- common/autotest_common.sh@550 -- # local fuzzers 00:06:46.608 06:44:54 llvm_fuzz -- common/autotest_common.sh@552 -- # [[ -n '' ]] 00:06:46.608 06:44:54 llvm_fuzz -- common/autotest_common.sh@555 -- # fuzzers=("$rootdir/test/fuzz/llvm/"*) 00:06:46.608 06:44:54 llvm_fuzz -- common/autotest_common.sh@556 -- # fuzzers=("${fuzzers[@]##*/}") 00:06:46.608 06:44:54 llvm_fuzz -- common/autotest_common.sh@559 -- # echo 'common.sh llvm-gcov.sh nvmf vfio' 00:06:46.608 06:44:54 llvm_fuzz -- fuzz/llvm.sh@13 -- # llvm_out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm 00:06:46.608 06:44:54 llvm_fuzz -- fuzz/llvm.sh@15 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm 00:06:46.608 06:44:54 llvm_fuzz -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:06:46.608 06:44:54 llvm_fuzz -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:06:46.608 06:44:54 llvm_fuzz -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:06:46.608 06:44:54 llvm_fuzz -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:06:46.608 06:44:54 llvm_fuzz -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:06:46.608 06:44:54 llvm_fuzz -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:06:46.608 06:44:54 llvm_fuzz -- fuzz/llvm.sh@19 -- # run_test nvmf_llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:06:46.608 06:44:54 llvm_fuzz -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:46.608 06:44:54 llvm_fuzz -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:46.608 06:44:54 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:06:46.868 ************************************ 00:06:46.868 START TEST nvmf_llvm_fuzz 00:06:46.868 ************************************ 00:06:46.868 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:06:46.868 * Looking for test storage... 
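The llvm.sh trace above builds its fuzzer list from the contents of test/fuzz/llvm/ (common.sh, llvm-gcov.sh, nvmf, vfio), strips the directory prefix, and then walks the list with a case statement so that only the real fuzzer suites get dispatched through run_test; the first one started here is nvmf_llvm_fuzz. A sketch of that selection loop, with the case arms inferred from the trace and echo standing in for run_test:

    #!/usr/bin/env bash
    # Enumerate fuzzer targets the way the traced llvm.sh run does and show
    # which run.sh each real suite would be dispatched to.
    rootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk

    fuzzers=("$rootdir/test/fuzz/llvm/"*)   # common.sh llvm-gcov.sh nvmf vfio
    fuzzers=("${fuzzers[@]##*/}")           # keep only the basenames

    for fuzzer in "${fuzzers[@]}"; do
        case "$fuzzer" in
            nvmf | vfio)
                echo "would run_test ${fuzzer}_llvm_fuzz $rootdir/test/fuzz/llvm/$fuzzer/run.sh"
                ;;
            *)
                : ;;   # helper files such as common.sh and llvm-gcov.sh are skipped
        esac
    done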
00:06:46.868 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:46.868 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:46.868 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:06:46.868 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:46.868 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:46.868 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:46.868 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:46.868 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:46.868 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:06:46.868 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:06:46.868 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:06:46.868 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:06:46.868 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:06:46.868 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:06:46.868 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:06:46.868 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:46.868 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:06:46.868 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:06:46.868 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:46.868 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:46.868 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:06:46.868 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:06:46.868 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:46.868 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:06:46.868 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:06:46.868 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:06:46.868 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:06:46.868 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:46.868 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:06:46.868 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:06:46.868 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:46.868 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:46.868 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:06:46.868 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:46.868 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:46.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.868 --rc genhtml_branch_coverage=1 00:06:46.868 --rc genhtml_function_coverage=1 00:06:46.868 --rc genhtml_legend=1 00:06:46.868 --rc geninfo_all_blocks=1 00:06:46.868 --rc geninfo_unexecuted_blocks=1 00:06:46.868 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:46.868 ' 00:06:46.868 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:46.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.868 --rc genhtml_branch_coverage=1 00:06:46.868 --rc genhtml_function_coverage=1 00:06:46.868 --rc genhtml_legend=1 00:06:46.868 --rc geninfo_all_blocks=1 00:06:46.868 --rc geninfo_unexecuted_blocks=1 00:06:46.868 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:46.868 ' 00:06:46.868 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:46.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.868 --rc genhtml_branch_coverage=1 00:06:46.868 --rc genhtml_function_coverage=1 00:06:46.868 --rc genhtml_legend=1 00:06:46.868 --rc geninfo_all_blocks=1 00:06:46.868 --rc geninfo_unexecuted_blocks=1 00:06:46.868 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:46.868 ' 00:06:46.868 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:46.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.868 --rc genhtml_branch_coverage=1 00:06:46.868 --rc genhtml_function_coverage=1 00:06:46.868 --rc genhtml_legend=1 00:06:46.868 --rc geninfo_all_blocks=1 00:06:46.868 --rc geninfo_unexecuted_blocks=1 00:06:46.868 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:46.868 ' 00:06:46.868 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@60 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:06:46.868 06:44:54 llvm_fuzz.nvmf_llvm_fuzz 
-- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:06:46.868 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:06:46.868 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@34 -- # set -e 00:06:46.868 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:06:46.868 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@36 -- # shopt -s extglob 00:06:46.868 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:06:46.868 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:06:46.868 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:06:46.868 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:06:46.868 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:46.868 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:46.868 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:46.868 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:46.868 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:06:46.868 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:46.868 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:46.868 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:46.868 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:46.868 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:46.868 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:46.868 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:46.868 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:46.868 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:46.868 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:46.868 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:46.868 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:06:46.868 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:06:46.868 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:46.868 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:06:46.868 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:06:46.868 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:06:46.868 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@23 -- # CONFIG_CET=n 00:06:46.868 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@24 -- # 
CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:46.868 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:06:46.868 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:06:46.868 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:06:46.868 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:46.868 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:46.868 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:06:46.868 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:06:46.869 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:06:46.869 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:06:46.869 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:06:46.869 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:06:46.869 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:06:46.869 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@37 -- # CONFIG_FUZZER=y 00:06:46.869 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:06:46.869 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:06:46.869 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:06:46.869 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:06:46.869 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:06:46.869 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:06:46.869 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:06:46.869 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:06:46.869 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:06:46.869 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:46.869 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:06:46.869 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:06:46.869 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:06:46.869 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:06:46.869 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:06:46.869 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:06:46.869 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:46.869 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:06:46.869 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:06:46.869 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:06:46.869 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@58 -- # 
CONFIG_ARCH=native 00:06:46.869 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:06:46.869 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:06:46.869 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:06:46.869 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:06:46.869 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:06:46.869 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:06:46.869 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:06:46.869 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:06:46.869 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:06:46.869 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:06:46.869 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:06:46.869 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:06:46.869 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:06:46.869 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@72 -- # CONFIG_SHARED=n 00:06:46.869 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:06:46.869 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:06:46.869 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:46.869 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@76 -- # CONFIG_FC=n 00:06:46.869 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:06:46.869 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:06:46.869 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:06:46.869 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:06:46.869 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:06:46.869 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:06:46.869 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:06:46.869 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:06:46.869 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:06:46.869 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:06:46.869 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:46.869 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:06:46.869 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:06:46.869 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@90 -- # CONFIG_URING=n 00:06:46.869 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:06:46.869 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # dirname 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:06:46.869 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:06:46.869 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:06:46.869 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:06:46.869 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:06:46.869 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:06:46.869 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:06:46.869 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:06:46.869 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:06:46.869 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:06:46.869 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:06:46.869 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:06:46.869 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:06:46.869 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:06:46.869 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:06:46.869 #define SPDK_CONFIG_H 00:06:46.869 #define SPDK_CONFIG_AIO_FSDEV 1 00:06:46.869 #define SPDK_CONFIG_APPS 1 00:06:46.869 #define SPDK_CONFIG_ARCH native 00:06:46.869 #undef SPDK_CONFIG_ASAN 00:06:46.869 #undef SPDK_CONFIG_AVAHI 00:06:46.869 #undef SPDK_CONFIG_CET 00:06:46.869 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:06:46.869 #define SPDK_CONFIG_COVERAGE 1 00:06:46.869 #define SPDK_CONFIG_CROSS_PREFIX 00:06:46.869 #undef SPDK_CONFIG_CRYPTO 00:06:46.869 #undef SPDK_CONFIG_CRYPTO_MLX5 00:06:46.869 #undef SPDK_CONFIG_CUSTOMOCF 00:06:46.869 #undef SPDK_CONFIG_DAOS 00:06:46.869 #define SPDK_CONFIG_DAOS_DIR 00:06:46.869 #define SPDK_CONFIG_DEBUG 1 00:06:46.869 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:06:46.869 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:06:46.869 #define SPDK_CONFIG_DPDK_INC_DIR 00:06:46.869 #define SPDK_CONFIG_DPDK_LIB_DIR 00:06:46.869 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:06:46.869 #undef SPDK_CONFIG_DPDK_UADK 00:06:46.869 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:06:46.869 #define SPDK_CONFIG_EXAMPLES 1 00:06:46.869 #undef SPDK_CONFIG_FC 00:06:46.869 #define SPDK_CONFIG_FC_PATH 00:06:46.869 #define SPDK_CONFIG_FIO_PLUGIN 1 00:06:46.869 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:06:46.869 #define SPDK_CONFIG_FSDEV 1 00:06:46.869 #undef SPDK_CONFIG_FUSE 00:06:46.869 #define SPDK_CONFIG_FUZZER 1 00:06:46.869 #define SPDK_CONFIG_FUZZER_LIB /usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:06:46.869 #undef 
SPDK_CONFIG_GOLANG 00:06:46.869 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:06:46.869 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:06:46.869 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:06:46.869 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:06:46.869 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:06:46.869 #undef SPDK_CONFIG_HAVE_LIBBSD 00:06:46.869 #undef SPDK_CONFIG_HAVE_LZ4 00:06:46.869 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:06:46.869 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:06:46.869 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:06:46.869 #define SPDK_CONFIG_IDXD 1 00:06:46.869 #define SPDK_CONFIG_IDXD_KERNEL 1 00:06:46.869 #undef SPDK_CONFIG_IPSEC_MB 00:06:46.869 #define SPDK_CONFIG_IPSEC_MB_DIR 00:06:46.869 #define SPDK_CONFIG_ISAL 1 00:06:46.869 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:06:46.869 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:06:46.869 #define SPDK_CONFIG_LIBDIR 00:06:46.869 #undef SPDK_CONFIG_LTO 00:06:46.869 #define SPDK_CONFIG_MAX_LCORES 128 00:06:46.869 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:06:46.869 #define SPDK_CONFIG_NVME_CUSE 1 00:06:46.869 #undef SPDK_CONFIG_OCF 00:06:46.869 #define SPDK_CONFIG_OCF_PATH 00:06:46.869 #define SPDK_CONFIG_OPENSSL_PATH 00:06:46.869 #undef SPDK_CONFIG_PGO_CAPTURE 00:06:46.869 #define SPDK_CONFIG_PGO_DIR 00:06:46.869 #undef SPDK_CONFIG_PGO_USE 00:06:46.869 #define SPDK_CONFIG_PREFIX /usr/local 00:06:46.869 #undef SPDK_CONFIG_RAID5F 00:06:46.869 #undef SPDK_CONFIG_RBD 00:06:46.869 #define SPDK_CONFIG_RDMA 1 00:06:46.869 #define SPDK_CONFIG_RDMA_PROV verbs 00:06:46.869 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:06:46.869 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:06:46.869 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:06:46.869 #undef SPDK_CONFIG_SHARED 00:06:46.869 #undef SPDK_CONFIG_SMA 00:06:46.869 #define SPDK_CONFIG_TESTS 1 00:06:46.869 #undef SPDK_CONFIG_TSAN 00:06:46.869 #define SPDK_CONFIG_UBLK 1 00:06:46.870 #define SPDK_CONFIG_UBSAN 1 00:06:46.870 #undef SPDK_CONFIG_UNIT_TESTS 00:06:46.870 #undef SPDK_CONFIG_URING 00:06:46.870 #define SPDK_CONFIG_URING_PATH 00:06:46.870 #undef SPDK_CONFIG_URING_ZNS 00:06:46.870 #undef SPDK_CONFIG_USDT 00:06:46.870 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:06:46.870 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:06:46.870 #define SPDK_CONFIG_VFIO_USER 1 00:06:46.870 #define SPDK_CONFIG_VFIO_USER_DIR 00:06:46.870 #define SPDK_CONFIG_VHOST 1 00:06:46.870 #define SPDK_CONFIG_VIRTIO 1 00:06:46.870 #undef SPDK_CONFIG_VTUNE 00:06:46.870 #define SPDK_CONFIG_VTUNE_DIR 00:06:46.870 #define SPDK_CONFIG_WERROR 1 00:06:46.870 #define SPDK_CONFIG_WPDK_DIR 00:06:46.870 #undef SPDK_CONFIG_XNVME 00:06:46.870 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:06:46.870 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:06:46.870 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:06:46.870 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:06:46.870 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:46.870 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:46.870 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:46.870 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.870 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.870 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.870 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@5 -- # export PATH 00:06:46.870 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.870 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:06:46.870 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:06:46.870 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:06:46.870 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:06:46.870 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:06:46.870 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:06:46.870 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@64 -- # TEST_TAG=N/A 00:06:46.870 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:06:46.870 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:06:46.870 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@68 -- # uname -s 00:06:46.870 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- 
pm/common@68 -- # PM_OS=Linux 00:06:46.870 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:06:46.870 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:06:46.870 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:06:46.870 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:06:46.870 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:06:46.870 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:06:46.870 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@76 -- # SUDO[0]= 00:06:46.870 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@76 -- # SUDO[1]='sudo -E' 00:06:46.870 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:06:46.870 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:06:46.870 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ Linux == Linux ]] 00:06:46.870 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:06:46.870 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:06:46.870 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:06:46.870 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:06:46.870 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power ]] 00:06:46.870 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@58 -- # : 1 00:06:46.870 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:06:46.870 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@62 -- # : 0 00:06:46.870 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:06:46.870 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@64 -- # : 0 00:06:46.870 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:06:46.870 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@66 -- # : 1 00:06:46.870 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:06:46.870 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@68 -- # : 0 00:06:46.870 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:06:46.870 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@70 -- # : 00:06:46.870 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:06:46.870 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@72 -- # : 0 00:06:46.870 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:06:46.870 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@74 -- # : 0 00:06:46.870 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:06:46.870 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@76 -- # : 0 00:06:46.870 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:06:46.870 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- 
common/autotest_common.sh@78 -- # : 0 00:06:46.870 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:06:46.870 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@80 -- # : 0 00:06:46.870 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:06:46.870 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@82 -- # : 0 00:06:46.870 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:06:46.870 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@84 -- # : 0 00:06:46.870 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:06:46.870 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@86 -- # : 0 00:06:46.870 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:06:46.870 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@88 -- # : 0 00:06:46.870 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:06:46.870 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@90 -- # : 0 00:06:46.870 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:06:46.870 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@92 -- # : 0 00:06:46.870 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:06:47.131 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@94 -- # : 0 00:06:47.131 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:06:47.131 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@96 -- # : 0 00:06:47.131 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:06:47.131 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@98 -- # : 1 00:06:47.131 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:06:47.131 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@100 -- # : 1 00:06:47.131 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:06:47.131 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@102 -- # : rdma 00:06:47.132 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:06:47.132 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@104 -- # : 0 00:06:47.132 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:06:47.132 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@106 -- # : 0 00:06:47.132 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:06:47.132 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@108 -- # : 0 00:06:47.132 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:06:47.132 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@110 -- # : 0 00:06:47.132 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:06:47.132 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@112 -- # : 0 00:06:47.132 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:06:47.132 06:44:54 
llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@114 -- # : 0 00:06:47.132 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:06:47.132 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@116 -- # : 0 00:06:47.132 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:06:47.132 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@118 -- # : 0 00:06:47.132 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:06:47.132 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@120 -- # : 0 00:06:47.132 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:06:47.132 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@122 -- # : 0 00:06:47.132 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:06:47.132 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@124 -- # : 1 00:06:47.132 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:06:47.132 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@126 -- # : 00:06:47.132 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:06:47.132 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@128 -- # : 0 00:06:47.132 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:06:47.132 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@130 -- # : 0 00:06:47.132 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:06:47.132 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@132 -- # : 0 00:06:47.132 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:06:47.132 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@134 -- # : 0 00:06:47.132 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:06:47.132 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@136 -- # : 0 00:06:47.132 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:06:47.132 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@138 -- # : 0 00:06:47.132 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:06:47.132 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@140 -- # : 00:06:47.132 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:06:47.132 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@142 -- # : true 00:06:47.132 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:06:47.132 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@144 -- # : 0 00:06:47.132 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:06:47.132 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@146 -- # : 0 00:06:47.132 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:06:47.132 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@148 -- # : 0 00:06:47.132 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 
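The long run of ': <value>' / 'export SPDK_TEST_*' pairs in this stretch of the trace is autotest_common.sh stamping out one test knob per pair: keep whatever value was inherited from the sourced autorun-spdk.conf, otherwise fall back to a built-in default, then export it. The underlying source is not shown in this log, so the ${VAR:=...} form below is an inference from those pairs; the fallback defaults are illustrative, and the right-hand comments are simply the effective values visible in this run:

    # Defaulting idiom inferred from the ': value' / 'export NAME' pairs above.
    : "${RUN_NIGHTLY:=0}";                 export RUN_NIGHTLY              # 1
    : "${SPDK_TEST_FUZZER:=0}";            export SPDK_TEST_FUZZER         # 1
    : "${SPDK_TEST_FUZZER_SHORT:=0}";      export SPDK_TEST_FUZZER_SHORT   # 1
    : "${SPDK_TEST_SETUP:=0}";             export SPDK_TEST_SETUP          # 1
    : "${SPDK_RUN_UBSAN:=0}";              export SPDK_RUN_UBSAN           # 1
    : "${SPDK_TEST_NVMF_TRANSPORT:=rdma}"; export SPDK_TEST_NVMF_TRANSPORT # rdma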
00:06:47.132 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@150 -- # : 0 00:06:47.132 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:06:47.132 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@152 -- # : 0 00:06:47.132 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:06:47.132 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@154 -- # : 00:06:47.132 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:06:47.132 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@156 -- # : 0 00:06:47.132 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:06:47.132 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@158 -- # : 0 00:06:47.132 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:06:47.132 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@160 -- # : 0 00:06:47.132 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:06:47.132 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@162 -- # : 0 00:06:47.132 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:06:47.132 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@164 -- # : 0 00:06:47.132 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:06:47.132 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@166 -- # : 0 00:06:47.132 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:06:47.132 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@169 -- # : 00:06:47.132 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:06:47.132 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@171 -- # : 0 00:06:47.132 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:06:47.132 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@173 -- # : 0 00:06:47.132 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:06:47.132 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@175 -- # : 1 00:06:47.132 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:06:47.132 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@177 -- # : 0 00:06:47.132 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:06:47.132 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:06:47.132 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:06:47.132 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:06:47.132 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:06:47.132 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@183 -- # export 
VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:47.132 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:47.132 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:47.133 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:47.133 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:06:47.133 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:06:47.133 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:06:47.133 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:06:47.133 06:44:54 
llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:06:47.133 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:06:47.133 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:47.133 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:47.133 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:47.133 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:47.133 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:06:47.133 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:06:47.133 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@206 -- # cat 00:06:47.133 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:06:47.133 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:47.133 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:47.133 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:47.133 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:47.133 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:06:47.133 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:06:47.133 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:06:47.133 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:06:47.133 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:06:47.133 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:06:47.133 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:47.133 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:47.133 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:47.133 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:47.133 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@262 -- # export 
AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:47.133 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:47.133 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:47.133 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:47.133 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:06:47.133 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:06:47.133 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@269 -- # _LCOV= 00:06:47.133 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:06:47.133 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@270 -- # [[ 1 -eq 1 ]] 00:06:47.133 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@270 -- # _LCOV=1 00:06:47.133 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:06:47.133 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:06:47.133 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@275 -- # lcov_opt='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:06:47.133 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:06:47.133 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@279 -- # export valgrind= 00:06:47.133 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@279 -- # valgrind= 00:06:47.133 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@285 -- # uname -s 00:06:47.133 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:06:47.133 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:06:47.133 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:06:47.133 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:06:47.133 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@289 -- # MAKE=make 00:06:47.133 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j112 00:06:47.133 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:06:47.133 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:06:47.133 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:06:47.133 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@309 -- # TEST_MODE= 00:06:47.133 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@331 -- # [[ -z 1152437 ]] 00:06:47.133 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@331 -- # kill -0 1152437 00:06:47.133 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:06:47.133 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:06:47.133 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:06:47.133 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- 
common/autotest_common.sh@344 -- # local mount target_dir 00:06:47.133 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:06:47.133 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:06:47.133 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:06:47.133 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:06:47.133 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.UXMzQn 00:06:47.133 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:06:47.133 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:06:47.133 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:06:47.133 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf /tmp/spdk.UXMzQn/tests/nvmf /tmp/spdk.UXMzQn 00:06:47.133 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:06:47.133 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:06:47.133 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@340 -- # df -T 00:06:47.133 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:06:47.133 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:06:47.133 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:06:47.133 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:06:47.133 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:06:47.133 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:06:47.133 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:06:47.133 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:06:47.133 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:06:47.133 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=785162240 00:06:47.133 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:06:47.133 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=4499267584 00:06:47.133 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:06:47.133 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:06:47.133 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:06:47.133 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=54088605696 00:06:47.133 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=61730570240 00:06:47.133 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@376 -- # 
uses["$mount"]=7641964544 00:06:47.133 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:06:47.133 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:06:47.133 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:06:47.133 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=30861856768 00:06:47.133 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=30865285120 00:06:47.133 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=3428352 00:06:47.133 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:06:47.133 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:06:47.134 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:06:47.134 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=12340113408 00:06:47.134 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=12346114048 00:06:47.134 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=6000640 00:06:47.134 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:06:47.134 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:06:47.134 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:06:47.134 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=30865084416 00:06:47.134 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=30865285120 00:06:47.134 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=200704 00:06:47.134 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:06:47.134 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:06:47.134 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:06:47.134 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=6173044736 00:06:47.134 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=6173057024 00:06:47.134 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:06:47.134 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:06:47.134 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:06:47.134 * Looking for test storage... 
00:06:47.134 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@381 -- # local target_space new_size 00:06:47.134 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:06:47.134 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:47.134 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:06:47.134 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@385 -- # mount=/ 00:06:47.134 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@387 -- # target_space=54088605696 00:06:47.134 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:06:47.134 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:06:47.134 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:06:47.134 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:06:47.134 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:06:47.134 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@394 -- # new_size=9856557056 00:06:47.134 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:06:47.134 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:47.134 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:47.134 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:47.134 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:47.134 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@402 -- # return 0 00:06:47.134 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1698 -- # set -o errtrace 00:06:47.134 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:06:47.134 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:06:47.134 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:06:47.134 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1703 -- # true 00:06:47.134 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1705 -- # xtrace_fd 00:06:47.134 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:06:47.134 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:06:47.134 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@27 -- # exec 00:06:47.134 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@29 -- # exec 00:06:47.134 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@31 -- # xtrace_restore 00:06:47.134 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@16 -- # unset -v 
'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:06:47.134 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:06:47.134 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@18 -- # set -x 00:06:47.134 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:47.134 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:06:47.134 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:47.134 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:47.134 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:47.134 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:47.134 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:47.134 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:06:47.134 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:06:47.134 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:06:47.134 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:06:47.134 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:06:47.134 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:06:47.134 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:06:47.134 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:47.134 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:06:47.134 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:06:47.134 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:47.134 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:47.134 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:06:47.134 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:06:47.134 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:47.134 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:06:47.134 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:06:47.134 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:06:47.134 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:06:47.134 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:47.134 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:06:47.134 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:06:47.134 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:47.134 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:47.134 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:06:47.134 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:47.134 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:47.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.134 --rc genhtml_branch_coverage=1 00:06:47.134 --rc genhtml_function_coverage=1 00:06:47.134 --rc genhtml_legend=1 00:06:47.134 --rc geninfo_all_blocks=1 00:06:47.134 --rc geninfo_unexecuted_blocks=1 00:06:47.134 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:47.134 ' 00:06:47.134 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:47.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.134 --rc genhtml_branch_coverage=1 00:06:47.134 --rc genhtml_function_coverage=1 00:06:47.134 --rc genhtml_legend=1 00:06:47.134 --rc geninfo_all_blocks=1 00:06:47.135 --rc geninfo_unexecuted_blocks=1 00:06:47.135 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:47.135 ' 00:06:47.135 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:47.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.135 --rc genhtml_branch_coverage=1 00:06:47.135 --rc genhtml_function_coverage=1 00:06:47.135 --rc genhtml_legend=1 00:06:47.135 --rc geninfo_all_blocks=1 00:06:47.135 --rc geninfo_unexecuted_blocks=1 00:06:47.135 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:47.135 ' 00:06:47.135 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:47.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.135 --rc genhtml_branch_coverage=1 00:06:47.135 --rc genhtml_function_coverage=1 00:06:47.135 --rc genhtml_legend=1 00:06:47.135 --rc geninfo_all_blocks=1 00:06:47.135 --rc geninfo_unexecuted_blocks=1 00:06:47.135 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:47.135 ' 00:06:47.135 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@61 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/../common.sh 00:06:47.135 06:44:54 
llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@8 -- # pids=() 00:06:47.135 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@63 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:06:47.135 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@64 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:06:47.135 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@64 -- # fuzz_num=25 00:06:47.135 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@65 -- # (( fuzz_num != 0 )) 00:06:47.135 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@67 -- # trap 'cleanup /tmp/llvm_fuzz* /var/tmp/suppress_nvmf_fuzz; exit 1' SIGINT SIGTERM EXIT 00:06:47.135 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@69 -- # mem_size=512 00:06:47.135 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@70 -- # [[ 1 -eq 1 ]] 00:06:47.135 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@71 -- # start_llvm_fuzz_short 25 1 00:06:47.135 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@69 -- # local fuzz_num=25 00:06:47.135 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@70 -- # local time=1 00:06:47.135 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i = 0 )) 00:06:47.135 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:47.135 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:06:47.135 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=0 00:06:47.135 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:47.135 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:47.135 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:06:47.135 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_0.conf 00:06:47.135 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:47.135 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:47.135 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 0 00:06:47.135 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4400 00:06:47.135 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:06:47.135 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' 00:06:47.135 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4400"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:47.135 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:47.135 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:47.135 06:44:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' -c /tmp/fuzz_json_0.conf -t 1 -D 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 -Z 0 00:06:47.135 [2024-12-12 06:44:54.578500] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 00:06:47.135 [2024-12-12 06:44:54.578588] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1152573 ] 00:06:47.394 [2024-12-12 06:44:54.774848] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.394 [2024-12-12 06:44:54.809367] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.394 [2024-12-12 06:44:54.868434] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:47.394 [2024-12-12 06:44:54.884746] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4400 *** 00:06:47.394 INFO: Running with entropic power schedule (0xFF, 100). 00:06:47.394 INFO: Seed: 3462211851 00:06:47.652 INFO: Loaded 1 modules (390948 inline 8-bit counters): 390948 [0x2c8afcc, 0x2cea6f0), 00:06:47.652 INFO: Loaded 1 PC tables (390948 PCs): 390948 [0x2cea6f0,0x32e1930), 00:06:47.652 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:06:47.652 INFO: A corpus is not provided, starting from an empty corpus 00:06:47.652 #2 INITED exec/s: 0 rss: 65Mb 00:06:47.652 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:47.652 This may also happen if the target rejected all inputs we tried so far 00:06:47.652 [2024-12-12 06:44:54.930028] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:47.652 [2024-12-12 06:44:54.930057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.909 NEW_FUNC[1/714]: 0x43bbe8 in fuzz_admin_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:47 00:06:47.909 NEW_FUNC[2/714]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:47.909 #10 NEW cov: 12102 ft: 12102 corp: 2/109b lim: 320 exec/s: 0 rss: 71Mb L: 108/108 MS: 3 ChangeBit-ChangeByte-InsertRepeatedBytes- 00:06:47.909 [2024-12-12 06:44:55.260870] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:47.909 [2024-12-12 06:44:55.260903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.909 NEW_FUNC[1/1]: 0x14ff7e8 in nvmf_tgroup_poll /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/transport.c:576 00:06:47.909 #11 NEW cov: 12216 ft: 12608 corp: 3/217b lim: 320 exec/s: 0 rss: 71Mb L: 108/108 MS: 1 ChangeBit- 00:06:47.909 [2024-12-12 06:44:55.321117] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:10100000 00:06:47.909 [2024-12-12 06:44:55.321143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.909 [2024-12-12 06:44:55.321202] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: FIRMWARE COMMIT (10) qid:0 cid:5 nsid:10101010 cdw10:10101010 cdw11:10101010 
00:06:47.909 [2024-12-12 06:44:55.321216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.909 #12 NEW cov: 12230 ft: 12975 corp: 4/400b lim: 320 exec/s: 0 rss: 71Mb L: 183/183 MS: 1 InsertRepeatedBytes- 00:06:47.909 [2024-12-12 06:44:55.381267] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:10100000 00:06:47.909 [2024-12-12 06:44:55.381292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.909 [2024-12-12 06:44:55.381358] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: FIRMWARE COMMIT (10) qid:0 cid:5 nsid:10101010 cdw10:10101010 cdw11:10101010 00:06:47.909 [2024-12-12 06:44:55.381372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.909 #13 NEW cov: 12315 ft: 13134 corp: 5/583b lim: 320 exec/s: 0 rss: 72Mb L: 183/183 MS: 1 ShuffleBytes- 00:06:48.167 [2024-12-12 06:44:55.441311] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:48.167 [2024-12-12 06:44:55.441336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.167 #14 NEW cov: 12315 ft: 13434 corp: 6/691b lim: 320 exec/s: 0 rss: 72Mb L: 108/183 MS: 1 ShuffleBytes- 00:06:48.167 [2024-12-12 06:44:55.481394] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:48.167 [2024-12-12 06:44:55.481419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.167 #15 NEW cov: 12315 ft: 13603 corp: 7/799b lim: 320 exec/s: 0 rss: 72Mb L: 108/183 MS: 1 ChangeBit- 00:06:48.167 [2024-12-12 06:44:55.521630] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:10000000 00:06:48.167 [2024-12-12 06:44:55.521655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.168 [2024-12-12 06:44:55.521726] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: FIRMWARE COMMIT (10) qid:0 cid:5 nsid:10101010 cdw10:10101010 cdw11:10101010 00:06:48.168 [2024-12-12 06:44:55.521741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.168 #16 NEW cov: 12315 ft: 13748 corp: 8/983b lim: 320 exec/s: 0 rss: 72Mb L: 184/184 MS: 1 InsertByte- 00:06:48.168 [2024-12-12 06:44:55.561747] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:10100000 00:06:48.168 [2024-12-12 06:44:55.561772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.168 [2024-12-12 06:44:55.561825] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: FIRMWARE COMMIT (10) qid:0 cid:5 nsid:10101010 cdw10:10101010 cdw11:10101010 00:06:48.168 [2024-12-12 06:44:55.561839] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.168 #17 NEW cov: 12315 ft: 13801 corp: 9/1166b lim: 320 exec/s: 0 rss: 72Mb L: 183/184 MS: 1 ChangeBinInt- 
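Each '#N NEW cov:' record in this stretch is standard libFuzzer progress output from llvm_nvme_fuzz: cov counts covered code edges, ft coverage features, corp the corpus size in entries and bytes, lim the current input-size limit, exec/s the execution rate, rss resident memory, L the input length, and MS the mutation sequence that produced the coverage-increasing input; the paired nvme_qpair.c NOTICE lines are the fuzzed admin command and its completion as decoded by the target. A throwaway one-liner for watching edge-coverage growth from a saved copy of this console output (fuzz_0.log is a placeholder file name, not something this job writes):

    # Illustrative only: print the cov value of every coverage-increasing input in order.
    grep -oE 'NEW +cov: [0-9]+' fuzz_0.log | awk '{print $3}'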
00:06:48.168 [2024-12-12 06:44:55.601775] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:df698538 cdw10:00000000 cdw11:00000000 00:06:48.168 [2024-12-12 06:44:55.601803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.168 #18 NEW cov: 12315 ft: 13868 corp: 10/1282b lim: 320 exec/s: 0 rss: 72Mb L: 116/184 MS: 1 CMP- DE: "8\205i\337,!\002\000"- 00:06:48.168 [2024-12-12 06:44:55.642123] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:10100000 00:06:48.168 [2024-12-12 06:44:55.642152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.168 [2024-12-12 06:44:55.642206] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: FIRMWARE COMMIT (10) qid:0 cid:5 nsid:10101010 cdw10:10101010 cdw11:10101010 00:06:48.168 [2024-12-12 06:44:55.642220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.168 [2024-12-12 06:44:55.642271] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:48.168 [2024-12-12 06:44:55.642284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:48.168 #19 NEW cov: 12315 ft: 14092 corp: 11/1490b lim: 320 exec/s: 0 rss: 72Mb L: 208/208 MS: 1 CrossOver- 00:06:48.168 [2024-12-12 06:44:55.682225] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:10100000 00:06:48.168 [2024-12-12 06:44:55.682250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.168 [2024-12-12 06:44:55.682301] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: FIRMWARE COMMIT (10) qid:0 cid:5 nsid:10101010 cdw10:10101010 cdw11:10101010 00:06:48.168 [2024-12-12 06:44:55.682314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.168 [2024-12-12 06:44:55.682361] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:48.168 [2024-12-12 06:44:55.682374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:48.426 #20 NEW cov: 12315 ft: 14105 corp: 12/1699b lim: 320 exec/s: 0 rss: 72Mb L: 209/209 MS: 1 InsertByte- 00:06:48.426 [2024-12-12 06:44:55.742166] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: VIRTUALIZATION MANAGEMENT (1c) qid:0 cid:4 nsid:1c1c1c1c cdw10:1c1c1c1c cdw11:1c1c1c1c 00:06:48.426 [2024-12-12 06:44:55.742191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.426 #21 NEW cov: 12316 ft: 14127 corp: 13/1825b lim: 320 exec/s: 0 rss: 72Mb L: 126/209 MS: 1 InsertRepeatedBytes- 00:06:48.426 [2024-12-12 06:44:55.782661] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:10100000 00:06:48.426 [2024-12-12 06:44:55.782686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.426 [2024-12-12 
06:44:55.782758] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:5050505 cdw10:05050505 cdw11:10101010 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.426 [2024-12-12 06:44:55.782772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.426 [2024-12-12 06:44:55.782824] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: FIRMWARE COMMIT (10) qid:0 cid:6 nsid:10101010 cdw10:10101010 cdw11:00000010 00:06:48.426 [2024-12-12 06:44:55.782838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:48.426 [2024-12-12 06:44:55.782888] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:48.426 [2024-12-12 06:44:55.782901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:48.426 NEW_FUNC[1/3]: 0x1977518 in nvme_get_sgl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_qpair.c:159 00:06:48.426 NEW_FUNC[2/3]: 0x1978088 in nvme_get_sgl_unkeyed /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_qpair.c:143 00:06:48.426 #22 NEW cov: 12372 ft: 14672 corp: 14/2086b lim: 320 exec/s: 0 rss: 72Mb L: 261/261 MS: 1 InsertRepeatedBytes- 00:06:48.426 [2024-12-12 06:44:55.842686] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:10100000 00:06:48.426 [2024-12-12 06:44:55.842711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.426 [2024-12-12 06:44:55.842763] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: FIRMWARE COMMIT (10) qid:0 cid:5 nsid:10101010 cdw10:10101010 cdw11:24101010 00:06:48.426 [2024-12-12 06:44:55.842776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.426 [2024-12-12 06:44:55.842825] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:48.426 [2024-12-12 06:44:55.842837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:48.426 #23 NEW cov: 12372 ft: 14685 corp: 15/2295b lim: 320 exec/s: 0 rss: 72Mb L: 209/261 MS: 1 ChangeByte- 00:06:48.426 [2024-12-12 06:44:55.882783] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:10000000 00:06:48.426 [2024-12-12 06:44:55.882808] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.426 [2024-12-12 06:44:55.882876] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: FIRMWARE COMMIT (10) qid:0 cid:5 nsid:10101010 cdw10:1c1c1c1c cdw11:1c1c1c1c 00:06:48.426 [2024-12-12 06:44:55.882890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.426 [2024-12-12 06:44:55.882942] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: FIRMWARE COMMIT (10) qid:0 cid:6 nsid:10101010 cdw10:00101010 cdw11:00000000 00:06:48.426 [2024-12-12 06:44:55.882956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 
dnr:0 00:06:48.426 #24 NEW cov: 12372 ft: 14744 corp: 16/2528b lim: 320 exec/s: 24 rss: 72Mb L: 233/261 MS: 1 CrossOver- 00:06:48.426 [2024-12-12 06:44:55.942772] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:48.426 [2024-12-12 06:44:55.942798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.683 #25 NEW cov: 12372 ft: 14754 corp: 17/2636b lim: 320 exec/s: 25 rss: 72Mb L: 108/261 MS: 1 ChangeByte- 00:06:48.683 [2024-12-12 06:44:56.002917] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: VIRTUALIZATION MANAGEMENT (1c) qid:0 cid:4 nsid:1c1c1c1c cdw10:1c1c1c1c cdw11:1c1c1c1c 00:06:48.683 [2024-12-12 06:44:56.002943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.683 #26 NEW cov: 12372 ft: 14776 corp: 18/2704b lim: 320 exec/s: 26 rss: 72Mb L: 68/261 MS: 1 EraseBytes- 00:06:48.683 [2024-12-12 06:44:56.063190] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:10100000 00:06:48.683 [2024-12-12 06:44:56.063232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.683 [2024-12-12 06:44:56.063286] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: FIRMWARE COMMIT (10) qid:0 cid:5 nsid:10101010 cdw10:10101010 cdw11:10101010 00:06:48.683 [2024-12-12 06:44:56.063300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.683 #27 NEW cov: 12372 ft: 14809 corp: 19/2888b lim: 320 exec/s: 27 rss: 72Mb L: 184/261 MS: 1 InsertByte- 00:06:48.683 [2024-12-12 06:44:56.103437] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:10100000 00:06:48.683 [2024-12-12 06:44:56.103465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.683 [2024-12-12 06:44:56.103519] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: FIRMWARE COMMIT (10) qid:0 cid:5 nsid:10101010 cdw10:10101010 cdw11:10101010 00:06:48.683 [2024-12-12 06:44:56.103532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.684 [2024-12-12 06:44:56.103584] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:48.684 [2024-12-12 06:44:56.103597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:48.684 #28 NEW cov: 12372 ft: 14841 corp: 20/3131b lim: 320 exec/s: 28 rss: 72Mb L: 243/261 MS: 1 CrossOver- 00:06:48.684 [2024-12-12 06:44:56.163583] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:10100000 00:06:48.684 [2024-12-12 06:44:56.163608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.684 [2024-12-12 06:44:56.163660] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: FIRMWARE COMMIT (10) qid:0 cid:5 nsid:10101010 cdw10:10101010 cdw11:10101010 00:06:48.684 [2024-12-12 06:44:56.163674] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.684 [2024-12-12 06:44:56.163725] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:a00 cdw10:00000000 cdw11:00000000 00:06:48.684 [2024-12-12 06:44:56.163739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:48.684 #29 NEW cov: 12372 ft: 14923 corp: 21/3374b lim: 320 exec/s: 29 rss: 72Mb L: 243/261 MS: 1 ChangeBinInt- 00:06:48.941 [2024-12-12 06:44:56.223830] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:10100000 00:06:48.941 [2024-12-12 06:44:56.223855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.942 [2024-12-12 06:44:56.223925] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: FIRMWARE COMMIT (10) qid:0 cid:5 nsid:10101010 cdw10:10101010 cdw11:10101010 00:06:48.942 [2024-12-12 06:44:56.223939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.942 [2024-12-12 06:44:56.223988] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:48.942 [2024-12-12 06:44:56.224001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:48.942 [2024-12-12 06:44:56.224053] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:48.942 [2024-12-12 06:44:56.224066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:48.942 #30 NEW cov: 12372 ft: 14949 corp: 22/3644b lim: 320 exec/s: 30 rss: 72Mb L: 270/270 MS: 1 CopyPart- 00:06:48.942 [2024-12-12 06:44:56.263963] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:cf100000 00:06:48.942 [2024-12-12 06:44:56.263987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.942 [2024-12-12 06:44:56.264042] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: FIRMWARE COMMIT (10) qid:0 cid:5 nsid:5101010 cdw10:05050505 cdw11:05050505 00:06:48.942 [2024-12-12 06:44:56.264056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.942 [2024-12-12 06:44:56.264107] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: FIRMWARE COMMIT (10) qid:0 cid:6 nsid:10101010 cdw10:10101010 cdw11:10101010 00:06:48.942 [2024-12-12 06:44:56.264123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:48.942 [2024-12-12 06:44:56.264192] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:48.942 [2024-12-12 06:44:56.264206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:48.942 #31 NEW cov: 12372 ft: 14963 corp: 23/3920b lim: 320 exec/s: 31 rss: 72Mb L: 276/276 MS: 1 InsertRepeatedBytes- 00:06:48.942 [2024-12-12 06:44:56.324139] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:10100000 00:06:48.942 [2024-12-12 06:44:56.324168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.942 [2024-12-12 06:44:56.324238] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: FIRMWARE COMMIT (10) qid:0 cid:5 nsid:10101010 cdw10:10101010 cdw11:10101010 00:06:48.942 [2024-12-12 06:44:56.324253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.942 [2024-12-12 06:44:56.324310] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: FIRMWARE COMMIT (10) qid:0 cid:6 nsid:10101010 cdw10:00000000 cdw11:00000000 00:06:48.942 [2024-12-12 06:44:56.324324] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:48.942 [2024-12-12 06:44:56.324373] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:48.942 [2024-12-12 06:44:56.324386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:48.942 #32 NEW cov: 12372 ft: 14971 corp: 24/4233b lim: 320 exec/s: 32 rss: 73Mb L: 313/313 MS: 1 CopyPart- 00:06:48.942 [2024-12-12 06:44:56.383980] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:3c3c3c3c SGL TRANSPORT DATA BLOCK TRANSPORT 0x3c3c3c3c3c3c3c3c 00:06:48.942 [2024-12-12 06:44:56.384005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.942 #33 NEW cov: 12392 ft: 15038 corp: 25/4318b lim: 320 exec/s: 33 rss: 73Mb L: 85/313 MS: 1 InsertRepeatedBytes- 00:06:48.942 [2024-12-12 06:44:56.424294] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:df698538 cdw10:00000000 cdw11:00000000 00:06:48.942 [2024-12-12 06:44:56.424319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.942 [2024-12-12 06:44:56.424371] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:48.942 [2024-12-12 06:44:56.424384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.942 [2024-12-12 06:44:56.424434] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:290000 cdw10:00000010 cdw11:00000000 00:06:48.942 [2024-12-12 06:44:56.424448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:49.200 #34 NEW cov: 12392 ft: 15048 corp: 26/4534b lim: 320 exec/s: 34 rss: 73Mb L: 216/313 MS: 1 CrossOver- 00:06:49.200 [2024-12-12 06:44:56.484241] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:df698538 cdw10:00000000 cdw11:00000000 00:06:49.200 [2024-12-12 06:44:56.484266] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.200 #35 NEW cov: 12392 ft: 15072 corp: 27/4642b lim: 320 exec/s: 35 rss: 73Mb L: 108/313 MS: 1 PersAutoDict- DE: "8\205i\337,!\002\000"- 00:06:49.200 [2024-12-12 06:44:56.544731] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:10000000 00:06:49.200 [2024-12-12 06:44:56.544756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.200 [2024-12-12 06:44:56.544812] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: FIRMWARE COMMIT (10) qid:0 cid:5 nsid:10101010 cdw10:1c1c1c1c cdw11:1c1c1c1c 00:06:49.200 [2024-12-12 06:44:56.544826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.200 [2024-12-12 06:44:56.544876] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: FIRMWARE COMMIT (10) qid:0 cid:6 nsid:10101010 cdw10:00101010 cdw11:00000000 00:06:49.200 [2024-12-12 06:44:56.544889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:49.200 [2024-12-12 06:44:56.544938] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:65656565 cdw11:65656565 00:06:49.200 [2024-12-12 06:44:56.544951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:49.200 #36 NEW cov: 12392 ft: 15075 corp: 28/4916b lim: 320 exec/s: 36 rss: 73Mb L: 274/313 MS: 1 InsertRepeatedBytes- 00:06:49.200 [2024-12-12 06:44:56.604909] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:cf100000 00:06:49.200 [2024-12-12 06:44:56.604933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.200 [2024-12-12 06:44:56.605009] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (2d) qid:0 cid:5 nsid:5101010 cdw10:05050505 cdw11:05050505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.200 [2024-12-12 06:44:56.605023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.200 [2024-12-12 06:44:56.605076] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: FIRMWARE COMMIT (10) qid:0 cid:6 nsid:10101010 cdw10:10101010 cdw11:10101010 00:06:49.200 [2024-12-12 06:44:56.605090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:49.200 [2024-12-12 06:44:56.605139] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:49.200 [2024-12-12 06:44:56.605158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:49.200 #37 NEW cov: 12394 ft: 15086 corp: 29/5192b lim: 320 exec/s: 37 rss: 73Mb L: 276/313 MS: 1 CMP- DE: "\346\030\253t-!\002\000"- 00:06:49.200 [2024-12-12 06:44:56.664827] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:10100000 00:06:49.200 [2024-12-12 06:44:56.664853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.200 [2024-12-12 06:44:56.664907] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: FIRMWARE COMMIT (10) qid:0 cid:5 nsid:10101010 cdw10:10101010 cdw11:10101010 00:06:49.200 [2024-12-12 06:44:56.664921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.200 #38 NEW cov: 12394 ft: 15135 corp: 30/5375b lim: 320 exec/s: 38 rss: 73Mb L: 183/313 MS: 1 ChangeByte- 00:06:49.200 [2024-12-12 06:44:56.705123] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:49.200 [2024-12-12 06:44:56.705155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.200 [2024-12-12 06:44:56.705207] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:49.200 [2024-12-12 06:44:56.705222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.200 [2024-12-12 06:44:56.705273] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:49.200 [2024-12-12 06:44:56.705290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:49.458 #39 NEW cov: 12394 ft: 15149 corp: 31/5569b lim: 320 exec/s: 39 rss: 73Mb L: 194/313 MS: 1 CopyPart- 00:06:49.458 [2024-12-12 06:44:56.745224] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:49.458 [2024-12-12 06:44:56.745249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.458 [2024-12-12 06:44:56.745299] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:49.458 [2024-12-12 06:44:56.745312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.458 [2024-12-12 06:44:56.745363] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:49.458 [2024-12-12 06:44:56.745376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:49.458 #40 NEW cov: 12394 ft: 15154 corp: 32/5763b lim: 320 exec/s: 40 rss: 73Mb L: 194/313 MS: 1 CopyPart- 00:06:49.458 [2024-12-12 06:44:56.805582] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:cf100000 00:06:49.458 [2024-12-12 06:44:56.805606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.458 [2024-12-12 06:44:56.805667] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (2d) qid:0 cid:5 nsid:5101010 cdw10:05050505 cdw11:05050505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.458 [2024-12-12 06:44:56.805681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.458 [2024-12-12 06:44:56.805733] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: FIRMWARE COMMIT (10) qid:0 cid:6 nsid:10101010 cdw10:10101010 cdw11:10101010 00:06:49.458 [2024-12-12 06:44:56.805747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:49.458 [2024-12-12 06:44:56.805798] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00002900 00:06:49.458 [2024-12-12 06:44:56.805811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:49.458 #41 NEW cov: 12394 ft: 15159 corp: 33/6047b lim: 320 exec/s: 41 rss: 73Mb L: 284/313 MS: 1 PersAutoDict- DE: "\346\030\253t-!\002\000"- 00:06:49.458 [2024-12-12 06:44:56.865671] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:10100000 00:06:49.458 [2024-12-12 06:44:56.865697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.458 [2024-12-12 06:44:56.865750] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: FIRMWARE COMMIT (10) qid:0 cid:5 nsid:10101010 cdw10:10101010 cdw11:10101010 00:06:49.458 [2024-12-12 06:44:56.865763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.459 [2024-12-12 06:44:56.865814] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:49.459 [2024-12-12 06:44:56.865828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:49.459 [2024-12-12 06:44:56.865877] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:49.459 [2024-12-12 06:44:56.865890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:49.459 #42 NEW cov: 12394 ft: 15177 corp: 34/6332b lim: 320 exec/s: 42 rss: 73Mb L: 285/313 MS: 1 InsertRepeatedBytes- 00:06:49.459 [2024-12-12 06:44:56.905650] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:10100000 00:06:49.459 [2024-12-12 06:44:56.905678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.459 [2024-12-12 06:44:56.905748] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: FIRMWARE COMMIT (10) qid:0 cid:5 nsid:10101010 cdw10:10101010 cdw11:10101010 00:06:49.459 [2024-12-12 06:44:56.905762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.459 [2024-12-12 06:44:56.905814] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:49.459 [2024-12-12 06:44:56.905827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:49.459 #43 NEW cov: 12394 ft: 15185 corp: 35/6575b lim: 320 exec/s: 21 rss: 73Mb L: 243/313 MS: 1 ShuffleBytes- 00:06:49.459 #43 DONE cov: 12394 ft: 15185 corp: 35/6575b lim: 320 exec/s: 21 rss: 73Mb 00:06:49.459 ###### Recommended dictionary. ###### 00:06:49.459 "8\205i\337,!\002\000" # Uses: 1 00:06:49.459 "\346\030\253t-!\002\000" # Uses: 1 00:06:49.459 ###### End of recommended dictionary. 
###### 00:06:49.459 Done 43 runs in 2 second(s) 00:06:49.717 06:44:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_0.conf /var/tmp/suppress_nvmf_fuzz 00:06:49.717 06:44:57 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:49.717 06:44:57 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:49.717 06:44:57 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:06:49.717 06:44:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=1 00:06:49.717 06:44:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:49.717 06:44:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:49.717 06:44:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:06:49.717 06:44:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_1.conf 00:06:49.717 06:44:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:49.717 06:44:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:49.717 06:44:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 1 00:06:49.717 06:44:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4401 00:06:49.717 06:44:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:06:49.717 06:44:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' 00:06:49.717 06:44:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4401"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:49.717 06:44:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:49.717 06:44:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:49.717 06:44:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' -c /tmp/fuzz_json_1.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 -Z 1 00:06:49.717 [2024-12-12 06:44:57.079177] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 00:06:49.717 [2024-12-12 06:44:57.079246] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1153034 ] 00:06:49.975 [2024-12-12 06:44:57.270870] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.975 [2024-12-12 06:44:57.304474] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.975 [2024-12-12 06:44:57.363484] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:49.975 [2024-12-12 06:44:57.379798] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4401 *** 00:06:49.975 INFO: Running with entropic power schedule (0xFF, 100). 
00:06:49.975 INFO: Seed: 1661223930 00:06:49.975 INFO: Loaded 1 modules (390948 inline 8-bit counters): 390948 [0x2c8afcc, 0x2cea6f0), 00:06:49.975 INFO: Loaded 1 PC tables (390948 PCs): 390948 [0x2cea6f0,0x32e1930), 00:06:49.975 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:06:49.975 INFO: A corpus is not provided, starting from an empty corpus 00:06:49.975 #2 INITED exec/s: 0 rss: 66Mb 00:06:49.975 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:49.975 This may also happen if the target rejected all inputs we tried so far 00:06:49.975 [2024-12-12 06:44:57.438558] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:06:49.975 [2024-12-12 06:44:57.438677] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:06:49.975 [2024-12-12 06:44:57.438779] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:06:49.975 [2024-12-12 06:44:57.438992] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:4a4a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.975 [2024-12-12 06:44:57.439023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.975 [2024-12-12 06:44:57.439078] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:4a4a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.975 [2024-12-12 06:44:57.439092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.975 [2024-12-12 06:44:57.439145] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:4a4a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.975 [2024-12-12 06:44:57.439163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.541 NEW_FUNC[1/716]: 0x43c4e8 in fuzz_admin_get_log_page_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:67 00:06:50.541 NEW_FUNC[2/716]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:50.541 #3 NEW cov: 12201 ft: 12167 corp: 2/22b lim: 30 exec/s: 0 rss: 73Mb L: 21/21 MS: 1 InsertRepeatedBytes- 00:06:50.541 [2024-12-12 06:44:57.779569] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:50.541 [2024-12-12 06:44:57.779703] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:50.541 [2024-12-12 06:44:57.779947] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.541 [2024-12-12 06:44:57.779997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.541 [2024-12-12 06:44:57.780077] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.541 [2024-12-12 06:44:57.780103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 
dnr:0 00:06:50.541 NEW_FUNC[1/1]: 0x1fb1c98 in thread_update_stats /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:953 00:06:50.541 #6 NEW cov: 12315 ft: 13133 corp: 3/34b lim: 30 exec/s: 0 rss: 73Mb L: 12/21 MS: 3 ShuffleBytes-ShuffleBytes-InsertRepeatedBytes- 00:06:50.541 [2024-12-12 06:44:57.829529] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:06:50.541 [2024-12-12 06:44:57.829645] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:06:50.541 [2024-12-12 06:44:57.829753] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:06:50.541 [2024-12-12 06:44:57.829967] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:4a4a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.541 [2024-12-12 06:44:57.829994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.541 [2024-12-12 06:44:57.830049] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:4a4a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.541 [2024-12-12 06:44:57.830063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.541 [2024-12-12 06:44:57.830116] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:4a4a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.541 [2024-12-12 06:44:57.830130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.541 #7 NEW cov: 12321 ft: 13412 corp: 4/56b lim: 30 exec/s: 0 rss: 73Mb L: 22/22 MS: 1 InsertByte- 00:06:50.541 [2024-12-12 06:44:57.889745] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:06:50.541 [2024-12-12 06:44:57.889862] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:06:50.541 [2024-12-12 06:44:57.889970] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:06:50.541 [2024-12-12 06:44:57.890204] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:4a4a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.541 [2024-12-12 06:44:57.890231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.541 [2024-12-12 06:44:57.890286] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:4a4a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.541 [2024-12-12 06:44:57.890300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.541 [2024-12-12 06:44:57.890355] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:4a4a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.541 [2024-12-12 06:44:57.890369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.541 #8 NEW cov: 12406 ft: 13637 corp: 5/78b lim: 30 exec/s: 0 rss: 73Mb L: 22/22 MS: 1 ChangeBit- 00:06:50.541 [2024-12-12 06:44:57.949862] 
ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:06:50.541 [2024-12-12 06:44:57.950088] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:4a4a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.541 [2024-12-12 06:44:57.950115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.541 #9 NEW cov: 12406 ft: 14082 corp: 6/89b lim: 30 exec/s: 0 rss: 73Mb L: 11/22 MS: 1 EraseBytes- 00:06:50.541 [2024-12-12 06:44:58.010036] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:06:50.541 [2024-12-12 06:44:58.010158] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:06:50.541 [2024-12-12 06:44:58.010268] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:06:50.541 [2024-12-12 06:44:58.010474] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0003024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.541 [2024-12-12 06:44:58.010500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.541 [2024-12-12 06:44:58.010556] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:4a4a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.541 [2024-12-12 06:44:58.010573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.541 [2024-12-12 06:44:58.010628] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:4a4a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.541 [2024-12-12 06:44:58.010641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.541 #10 NEW cov: 12406 ft: 14190 corp: 7/112b lim: 30 exec/s: 0 rss: 73Mb L: 23/23 MS: 1 CMP- DE: "\000\003"- 00:06:50.541 [2024-12-12 06:44:58.050163] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:06:50.542 [2024-12-12 06:44:58.050307] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:06:50.542 [2024-12-12 06:44:58.050431] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:06:50.542 [2024-12-12 06:44:58.050657] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0003024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.542 [2024-12-12 06:44:58.050684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.542 [2024-12-12 06:44:58.050738] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:4a4a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.542 [2024-12-12 06:44:58.050753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.542 [2024-12-12 06:44:58.050808] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:4a4a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.542 [2024-12-12 06:44:58.050822] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.799 #11 NEW cov: 12406 ft: 14280 corp: 8/135b lim: 30 exec/s: 0 rss: 73Mb L: 23/23 MS: 1 CopyPart- 00:06:50.799 [2024-12-12 06:44:58.110336] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:06:50.799 [2024-12-12 06:44:58.110456] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:06:50.799 [2024-12-12 06:44:58.110567] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:06:50.799 [2024-12-12 06:44:58.110783] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0003024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.799 [2024-12-12 06:44:58.110809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.799 [2024-12-12 06:44:58.110866] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:4a4a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.799 [2024-12-12 06:44:58.110880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.799 [2024-12-12 06:44:58.110934] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:4a4a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.799 [2024-12-12 06:44:58.110949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.799 #12 NEW cov: 12406 ft: 14371 corp: 9/158b lim: 30 exec/s: 0 rss: 73Mb L: 23/23 MS: 1 CrossOver- 00:06:50.799 [2024-12-12 06:44:58.150417] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200007373 00:06:50.799 [2024-12-12 06:44:58.150549] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:06:50.799 [2024-12-12 06:44:58.150660] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:06:50.800 [2024-12-12 06:44:58.150767] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:06:50.800 [2024-12-12 06:44:58.150974] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0003024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.800 [2024-12-12 06:44:58.151003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.800 [2024-12-12 06:44:58.151061] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:734a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.800 [2024-12-12 06:44:58.151075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.800 [2024-12-12 06:44:58.151132] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:4a4a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.800 [2024-12-12 06:44:58.151146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.800 [2024-12-12 06:44:58.151209] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 
cid:7 nsid:0 cdw10:4a4a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.800 [2024-12-12 06:44:58.151234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:50.800 #13 NEW cov: 12406 ft: 14890 corp: 10/184b lim: 30 exec/s: 0 rss: 74Mb L: 26/26 MS: 1 InsertRepeatedBytes- 00:06:50.800 [2024-12-12 06:44:58.210509] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:50.800 [2024-12-12 06:44:58.210625] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:50.800 [2024-12-12 06:44:58.210848] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.800 [2024-12-12 06:44:58.210874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.800 [2024-12-12 06:44:58.210932] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.800 [2024-12-12 06:44:58.210946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.800 #14 NEW cov: 12406 ft: 14959 corp: 11/196b lim: 30 exec/s: 0 rss: 74Mb L: 12/26 MS: 1 ChangeBinInt- 00:06:50.800 [2024-12-12 06:44:58.270702] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:06:50.800 [2024-12-12 06:44:58.270832] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:06:50.800 [2024-12-12 06:44:58.270940] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:06:50.800 [2024-12-12 06:44:58.271156] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0003024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.800 [2024-12-12 06:44:58.271183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.800 [2024-12-12 06:44:58.271240] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:4a4a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.800 [2024-12-12 06:44:58.271254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.800 [2024-12-12 06:44:58.271308] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:6a4a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.800 [2024-12-12 06:44:58.271322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.800 NEW_FUNC[1/1]: 0x1c586b8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:652 00:06:50.800 #15 NEW cov: 12429 ft: 14972 corp: 12/219b lim: 30 exec/s: 0 rss: 74Mb L: 23/26 MS: 1 ChangeBit- 00:06:51.058 [2024-12-12 06:44:58.330815] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:06:51.058 [2024-12-12 06:44:58.331046] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:4a4a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.058 [2024-12-12 
06:44:58.331072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.058 #16 NEW cov: 12429 ft: 15093 corp: 13/226b lim: 30 exec/s: 0 rss: 74Mb L: 7/26 MS: 1 CrossOver- 00:06:51.058 [2024-12-12 06:44:58.370932] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:06:51.058 [2024-12-12 06:44:58.371175] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:4a4a022e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.058 [2024-12-12 06:44:58.371202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.058 #17 NEW cov: 12429 ft: 15110 corp: 14/233b lim: 30 exec/s: 0 rss: 74Mb L: 7/26 MS: 1 ChangeByte- 00:06:51.058 [2024-12-12 06:44:58.431184] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200007373 00:06:51.058 [2024-12-12 06:44:58.431316] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:06:51.058 [2024-12-12 06:44:58.431428] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:06:51.058 [2024-12-12 06:44:58.431533] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200003f4a 00:06:51.058 [2024-12-12 06:44:58.431752] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0003024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.058 [2024-12-12 06:44:58.431778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.058 [2024-12-12 06:44:58.431836] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:734a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.058 [2024-12-12 06:44:58.431850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.058 [2024-12-12 06:44:58.431905] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:4a4a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.058 [2024-12-12 06:44:58.431919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.058 [2024-12-12 06:44:58.431973] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:4a4a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.058 [2024-12-12 06:44:58.431987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:51.058 #18 NEW cov: 12429 ft: 15119 corp: 15/259b lim: 30 exec/s: 18 rss: 74Mb L: 26/26 MS: 1 ChangeByte- 00:06:51.058 [2024-12-12 06:44:58.491393] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:06:51.058 [2024-12-12 06:44:58.491526] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:06:51.058 [2024-12-12 06:44:58.491635] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:06:51.058 [2024-12-12 06:44:58.491742] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:06:51.058 [2024-12-12 06:44:58.491964] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET 
LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:4a4a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.058 [2024-12-12 06:44:58.491991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.058 [2024-12-12 06:44:58.492048] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:4a4a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.058 [2024-12-12 06:44:58.492062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.058 [2024-12-12 06:44:58.492121] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:4a4a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.058 [2024-12-12 06:44:58.492135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.058 [2024-12-12 06:44:58.492198] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:4ae6024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.058 [2024-12-12 06:44:58.492212] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:51.058 #19 NEW cov: 12429 ft: 15129 corp: 16/285b lim: 30 exec/s: 19 rss: 74Mb L: 26/26 MS: 1 CrossOver- 00:06:51.058 [2024-12-12 06:44:58.531398] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000fffb 00:06:51.058 [2024-12-12 06:44:58.531529] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:51.058 [2024-12-12 06:44:58.531746] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.058 [2024-12-12 06:44:58.531772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.058 [2024-12-12 06:44:58.531829] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.058 [2024-12-12 06:44:58.531843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.058 #20 NEW cov: 12429 ft: 15152 corp: 17/297b lim: 30 exec/s: 20 rss: 74Mb L: 12/26 MS: 1 ChangeBit- 00:06:51.058 [2024-12-12 06:44:58.571559] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:06:51.058 [2024-12-12 06:44:58.571693] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:06:51.058 [2024-12-12 06:44:58.571803] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:06:51.058 [2024-12-12 06:44:58.571912] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000910a 00:06:51.058 [2024-12-12 06:44:58.572128] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0003024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.058 [2024-12-12 06:44:58.572161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.058 [2024-12-12 06:44:58.572218] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:4a4a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.058 [2024-12-12 06:44:58.572232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.058 [2024-12-12 06:44:58.572286] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:4a4a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.058 [2024-12-12 06:44:58.572299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.058 [2024-12-12 06:44:58.572354] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:4a4a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.058 [2024-12-12 06:44:58.572367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:51.316 #21 NEW cov: 12429 ft: 15163 corp: 18/321b lim: 30 exec/s: 21 rss: 74Mb L: 24/26 MS: 1 InsertByte- 00:06:51.317 [2024-12-12 06:44:58.611662] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:06:51.317 [2024-12-12 06:44:58.611794] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:06:51.317 [2024-12-12 06:44:58.611905] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:06:51.317 [2024-12-12 06:44:58.612118] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0003024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.317 [2024-12-12 06:44:58.612155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.317 [2024-12-12 06:44:58.612214] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:4a4a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.317 [2024-12-12 06:44:58.612229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.317 [2024-12-12 06:44:58.612285] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:4a4a0244 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.317 [2024-12-12 06:44:58.612299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.317 #22 NEW cov: 12429 ft: 15227 corp: 19/344b lim: 30 exec/s: 22 rss: 74Mb L: 23/26 MS: 1 ChangeBinInt- 00:06:51.317 [2024-12-12 06:44:58.651689] ctrlr.c:2670:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (76076) > buf size (4096) 00:06:51.317 [2024-12-12 06:44:58.651925] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:4a4a0034 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.317 [2024-12-12 06:44:58.651951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.317 #24 NEW cov: 12452 ft: 15265 corp: 20/353b lim: 30 exec/s: 24 rss: 74Mb L: 9/26 MS: 2 EraseBytes-InsertRepeatedBytes- 00:06:51.317 [2024-12-12 06:44:58.691860] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:51.317 [2024-12-12 06:44:58.691996] 
ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:51.317 [2024-12-12 06:44:58.692213] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.317 [2024-12-12 06:44:58.692240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.317 [2024-12-12 06:44:58.692308] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.317 [2024-12-12 06:44:58.692323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.317 #25 NEW cov: 12452 ft: 15279 corp: 21/365b lim: 30 exec/s: 25 rss: 74Mb L: 12/26 MS: 1 CopyPart- 00:06:51.317 [2024-12-12 06:44:58.732029] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200007373 00:06:51.317 [2024-12-12 06:44:58.732172] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:06:51.317 [2024-12-12 06:44:58.732282] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:06:51.317 [2024-12-12 06:44:58.732392] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:06:51.317 [2024-12-12 06:44:58.732603] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0003024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.317 [2024-12-12 06:44:58.732629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.317 [2024-12-12 06:44:58.732686] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:734a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.317 [2024-12-12 06:44:58.732701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.317 [2024-12-12 06:44:58.732756] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:4a4a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.317 [2024-12-12 06:44:58.732770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.317 [2024-12-12 06:44:58.732829] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:4a4a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.317 [2024-12-12 06:44:58.732842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:51.317 #26 NEW cov: 12452 ft: 15309 corp: 22/391b lim: 30 exec/s: 26 rss: 74Mb L: 26/26 MS: 1 PersAutoDict- DE: "\000\003"- 00:06:51.317 [2024-12-12 06:44:58.772029] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:06:51.317 [2024-12-12 06:44:58.772268] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:4a4a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.317 [2024-12-12 06:44:58.772294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:06:51.317 #27 NEW cov: 12452 ft: 15353 corp: 23/402b lim: 30 exec/s: 27 rss: 74Mb L: 11/26 MS: 1 ChangeBit- 00:06:51.317 [2024-12-12 06:44:58.832201] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100002f21 00:06:51.317 [2024-12-12 06:44:58.832409] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:6ba881aa cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.317 [2024-12-12 06:44:58.832435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.575 #28 NEW cov: 12452 ft: 15381 corp: 24/411b lim: 30 exec/s: 28 rss: 74Mb L: 9/26 MS: 1 CMP- DE: "k\250\252\011/!\002\000"- 00:06:51.575 [2024-12-12 06:44:58.892445] ctrlr.c:2670:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (600364) > buf size (4096) 00:06:51.575 [2024-12-12 06:44:58.892580] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:06:51.575 [2024-12-12 06:44:58.892691] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:06:51.575 [2024-12-12 06:44:58.892911] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:4a4a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.575 [2024-12-12 06:44:58.892938] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.575 [2024-12-12 06:44:58.892996] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:034a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.575 [2024-12-12 06:44:58.893011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.575 [2024-12-12 06:44:58.893069] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:4a4a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.575 [2024-12-12 06:44:58.893083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.575 #29 NEW cov: 12452 ft: 15396 corp: 25/433b lim: 30 exec/s: 29 rss: 74Mb L: 22/26 MS: 1 PersAutoDict- DE: "\000\003"- 00:06:51.575 [2024-12-12 06:44:58.932575] ctrlr.c:2670:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10452) > buf size (4096) 00:06:51.575 [2024-12-12 06:44:58.932706] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x34ff 00:06:51.575 [2024-12-12 06:44:58.932814] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:51.575 [2024-12-12 06:44:58.933043] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a340034 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.575 [2024-12-12 06:44:58.933069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.575 [2024-12-12 06:44:58.933129] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:34340034 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.575 [2024-12-12 06:44:58.933147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.575 [2024-12-12 06:44:58.933209] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.575 [2024-12-12 06:44:58.933234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.575 #30 NEW cov: 12452 ft: 15404 corp: 26/455b lim: 30 exec/s: 30 rss: 74Mb L: 22/26 MS: 1 InsertRepeatedBytes- 00:06:51.575 [2024-12-12 06:44:58.972730] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200007373 00:06:51.575 [2024-12-12 06:44:58.972849] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:06:51.575 [2024-12-12 06:44:58.972958] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:06:51.575 [2024-12-12 06:44:58.973063] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:06:51.575 [2024-12-12 06:44:58.973303] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0003024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.575 [2024-12-12 06:44:58.973330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.575 [2024-12-12 06:44:58.973389] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:734a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.575 [2024-12-12 06:44:58.973405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.575 [2024-12-12 06:44:58.973464] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:4a4a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.575 [2024-12-12 06:44:58.973478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.575 [2024-12-12 06:44:58.973535] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:4a4a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.575 [2024-12-12 06:44:58.973549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:51.575 #31 NEW cov: 12452 ft: 15415 corp: 27/481b lim: 30 exec/s: 31 rss: 74Mb L: 26/26 MS: 1 CrossOver- 00:06:51.575 [2024-12-12 06:44:59.012735] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300002121 00:06:51.575 [2024-12-12 06:44:59.012970] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:6ba883aa cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.575 [2024-12-12 06:44:59.012997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.575 #32 NEW cov: 12452 ft: 15428 corp: 28/490b lim: 30 exec/s: 32 rss: 74Mb L: 9/26 MS: 1 CopyPart- 00:06:51.575 [2024-12-12 06:44:59.073032] ctrlr.c:2670:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (524304) > buf size (4096) 00:06:51.575 [2024-12-12 06:44:59.073162] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:06:51.576 [2024-12-12 06:44:59.073277] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:06:51.576 [2024-12-12 
06:44:59.073388] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:06:51.576 [2024-12-12 06:44:59.073620] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0003024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.576 [2024-12-12 06:44:59.073647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.576 [2024-12-12 06:44:59.073705] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:734a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.576 [2024-12-12 06:44:59.073720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.576 [2024-12-12 06:44:59.073781] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:4a4a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.576 [2024-12-12 06:44:59.073796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.576 [2024-12-12 06:44:59.073854] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:4a4a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.576 [2024-12-12 06:44:59.073868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:51.832 #33 NEW cov: 12452 ft: 15463 corp: 29/516b lim: 30 exec/s: 33 rss: 74Mb L: 26/26 MS: 1 ChangeByte- 00:06:51.832 [2024-12-12 06:44:59.133145] ctrlr.c:2670:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (600364) > buf size (4096) 00:06:51.832 [2024-12-12 06:44:59.133271] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:06:51.832 [2024-12-12 06:44:59.133379] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:06:51.832 [2024-12-12 06:44:59.133594] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:4a4a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.832 [2024-12-12 06:44:59.133620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.832 [2024-12-12 06:44:59.133676] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:034a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.832 [2024-12-12 06:44:59.133689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.832 [2024-12-12 06:44:59.133745] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:4a4a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.832 [2024-12-12 06:44:59.133758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.833 #34 NEW cov: 12452 ft: 15500 corp: 30/538b lim: 30 exec/s: 34 rss: 75Mb L: 22/26 MS: 1 ChangeByte- 00:06:51.833 [2024-12-12 06:44:59.193310] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200007373 00:06:51.833 [2024-12-12 06:44:59.193430] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 
00:06:51.833 [2024-12-12 06:44:59.193553] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:06:51.833 [2024-12-12 06:44:59.193659] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:06:51.833 [2024-12-12 06:44:59.193878] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0003024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.833 [2024-12-12 06:44:59.193904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.833 [2024-12-12 06:44:59.193964] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:734a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.833 [2024-12-12 06:44:59.193979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.833 [2024-12-12 06:44:59.194036] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:4a4a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.833 [2024-12-12 06:44:59.194049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.833 [2024-12-12 06:44:59.194106] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:4a4a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.833 [2024-12-12 06:44:59.194128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:51.833 #35 NEW cov: 12452 ft: 15503 corp: 31/564b lim: 30 exec/s: 35 rss: 75Mb L: 26/26 MS: 1 ShuffleBytes- 00:06:51.833 [2024-12-12 06:44:59.233351] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300004a4a 00:06:51.833 [2024-12-12 06:44:59.233469] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:51.833 [2024-12-12 06:44:59.233693] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.833 [2024-12-12 06:44:59.233720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.833 [2024-12-12 06:44:59.233777] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:fffb83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.833 [2024-12-12 06:44:59.233791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.833 #36 NEW cov: 12452 ft: 15526 corp: 32/578b lim: 30 exec/s: 36 rss: 75Mb L: 14/26 MS: 1 CrossOver- 00:06:51.833 [2024-12-12 06:44:59.293622] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:06:51.833 [2024-12-12 06:44:59.293745] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:06:51.833 [2024-12-12 06:44:59.293853] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:06:51.833 [2024-12-12 06:44:59.293964] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a02 00:06:51.833 [2024-12-12 06:44:59.294195] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:4a4a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.833 [2024-12-12 06:44:59.294221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.833 [2024-12-12 06:44:59.294279] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:4a4a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.833 [2024-12-12 06:44:59.294294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.833 [2024-12-12 06:44:59.294351] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:4a4a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.833 [2024-12-12 06:44:59.294366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.833 [2024-12-12 06:44:59.294420] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:4a4a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.833 [2024-12-12 06:44:59.294434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:51.833 #37 NEW cov: 12452 ft: 15549 corp: 33/602b lim: 30 exec/s: 37 rss: 75Mb L: 24/26 MS: 1 CopyPart- 00:06:51.833 [2024-12-12 06:44:59.333641] ctrlr.c:2670:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (600364) > buf size (4096) 00:06:51.833 [2024-12-12 06:44:59.333861] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:4a4a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.833 [2024-12-12 06:44:59.333888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.091 #38 NEW cov: 12452 ft: 15564 corp: 34/609b lim: 30 exec/s: 38 rss: 75Mb L: 7/26 MS: 1 CrossOver- 00:06:52.091 [2024-12-12 06:44:59.393860] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:06:52.091 [2024-12-12 06:44:59.393977] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:06:52.091 [2024-12-12 06:44:59.394087] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004ae6 00:06:52.091 [2024-12-12 06:44:59.394311] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:4a4a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.091 [2024-12-12 06:44:59.394337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.091 [2024-12-12 06:44:59.394393] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:4a4a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.091 [2024-12-12 06:44:59.394407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.091 [2024-12-12 06:44:59.394465] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:4a4a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.091 [2024-12-12 06:44:59.394478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) 
qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:52.091 #39 NEW cov: 12452 ft: 15570 corp: 35/631b lim: 30 exec/s: 39 rss: 75Mb L: 22/26 MS: 1 CopyPart- 00:06:52.091 [2024-12-12 06:44:59.433992] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200007373 00:06:52.091 [2024-12-12 06:44:59.434126] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:06:52.091 [2024-12-12 06:44:59.434259] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:06:52.091 [2024-12-12 06:44:59.434369] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200000a4a 00:06:52.091 [2024-12-12 06:44:59.434594] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0003024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.091 [2024-12-12 06:44:59.434622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.091 [2024-12-12 06:44:59.434681] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:734a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.091 [2024-12-12 06:44:59.434695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.092 [2024-12-12 06:44:59.434753] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:4a4a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.092 [2024-12-12 06:44:59.434767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:52.092 [2024-12-12 06:44:59.434824] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:4a4a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.092 [2024-12-12 06:44:59.434837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:52.092 #40 NEW cov: 12452 ft: 15592 corp: 36/658b lim: 30 exec/s: 20 rss: 75Mb L: 27/27 MS: 1 CrossOver- 00:06:52.092 #40 DONE cov: 12452 ft: 15592 corp: 36/658b lim: 30 exec/s: 20 rss: 75Mb 00:06:52.092 ###### Recommended dictionary. ###### 00:06:52.092 "\000\003" # Uses: 2 00:06:52.092 "k\250\252\011/!\002\000" # Uses: 0 00:06:52.092 ###### End of recommended dictionary. 
###### 00:06:52.092 Done 40 runs in 2 second(s) 00:06:52.092 06:44:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_1.conf /var/tmp/suppress_nvmf_fuzz 00:06:52.092 06:44:59 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:52.092 06:44:59 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:52.092 06:44:59 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:06:52.092 06:44:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=2 00:06:52.092 06:44:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:52.092 06:44:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:52.092 06:44:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:06:52.092 06:44:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_2.conf 00:06:52.092 06:44:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:52.092 06:44:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:52.092 06:44:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 2 00:06:52.092 06:44:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4402 00:06:52.092 06:44:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:06:52.092 06:44:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' 00:06:52.092 06:44:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4402"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:52.092 06:44:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:52.092 06:44:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:52.092 06:44:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' -c /tmp/fuzz_json_2.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 -Z 2 00:06:52.350 [2024-12-12 06:44:59.626616] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 00:06:52.350 [2024-12-12 06:44:59.626684] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1153577 ] 00:06:52.350 [2024-12-12 06:44:59.815618] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.350 [2024-12-12 06:44:59.848387] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.609 [2024-12-12 06:44:59.907654] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:52.609 [2024-12-12 06:44:59.923973] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4402 *** 00:06:52.609 INFO: Running with entropic power schedule (0xFF, 100). 
00:06:52.609 INFO: Seed: 4207239753 00:06:52.609 INFO: Loaded 1 modules (390948 inline 8-bit counters): 390948 [0x2c8afcc, 0x2cea6f0), 00:06:52.609 INFO: Loaded 1 PC tables (390948 PCs): 390948 [0x2cea6f0,0x32e1930), 00:06:52.609 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:06:52.609 INFO: A corpus is not provided, starting from an empty corpus 00:06:52.609 #2 INITED exec/s: 0 rss: 65Mb 00:06:52.609 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:52.609 This may also happen if the target rejected all inputs we tried so far 00:06:52.609 [2024-12-12 06:44:59.979651] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff0074 cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.609 [2024-12-12 06:44:59.979680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.609 [2024-12-12 06:44:59.979752] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.609 [2024-12-12 06:44:59.979767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.610 [2024-12-12 06:44:59.979822] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.610 [2024-12-12 06:44:59.979835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:52.868 NEW_FUNC[1/715]: 0x43ef98 in fuzz_admin_identify_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:95 00:06:52.868 NEW_FUNC[2/715]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:52.868 #12 NEW cov: 12157 ft: 12157 corp: 2/25b lim: 35 exec/s: 0 rss: 72Mb L: 24/24 MS: 5 ChangeByte-CMP-ChangeBit-ChangeBinInt-InsertRepeatedBytes- DE: "t\000"- 00:06:52.868 [2024-12-12 06:45:00.300440] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.868 [2024-12-12 06:45:00.300481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.868 NEW_FUNC[1/1]: 0x10868b8 in _sock_flush /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/module/sock/posix/posix.c:1347 00:06:52.868 #14 NEW cov: 12271 ft: 13329 corp: 3/44b lim: 35 exec/s: 0 rss: 72Mb L: 19/24 MS: 2 ChangeBinInt-CrossOver- 00:06:52.869 [2024-12-12 06:45:00.350464] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff0074 cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.869 [2024-12-12 06:45:00.350492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.869 [2024-12-12 06:45:00.350545] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.869 [2024-12-12 06:45:00.350559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 
cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.869 [2024-12-12 06:45:00.350612] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.869 [2024-12-12 06:45:00.350626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.128 #15 NEW cov: 12277 ft: 13556 corp: 4/68b lim: 35 exec/s: 0 rss: 72Mb L: 24/24 MS: 1 ShuffleBytes- 00:06:53.128 [2024-12-12 06:45:00.410557] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.128 [2024-12-12 06:45:00.410583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.128 #16 NEW cov: 12362 ft: 13895 corp: 5/88b lim: 35 exec/s: 0 rss: 72Mb L: 20/24 MS: 1 CrossOver- 00:06:53.128 [2024-12-12 06:45:00.470581] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:53.128 [2024-12-12 06:45:00.470815] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff0074 cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.128 [2024-12-12 06:45:00.470842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.128 [2024-12-12 06:45:00.470896] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:050000ff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.128 [2024-12-12 06:45:00.470910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.128 [2024-12-12 06:45:00.470965] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00ff0000 cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.128 [2024-12-12 06:45:00.470981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.128 #17 NEW cov: 12373 ft: 14154 corp: 6/112b lim: 35 exec/s: 0 rss: 72Mb L: 24/24 MS: 1 ChangeBinInt- 00:06:53.128 [2024-12-12 06:45:00.511020] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff0074 cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.128 [2024-12-12 06:45:00.511045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.128 [2024-12-12 06:45:00.511101] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.128 [2024-12-12 06:45:00.511118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.128 [2024-12-12 06:45:00.511177] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.128 [2024-12-12 06:45:00.511191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.128 [2024-12-12 06:45:00.511244] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.128 [2024-12-12 06:45:00.511258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:53.128 #18 NEW cov: 12373 ft: 14684 corp: 7/145b lim: 35 exec/s: 0 rss: 72Mb L: 33/33 MS: 1 CrossOver- 00:06:53.128 [2024-12-12 06:45:00.550989] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.128 [2024-12-12 06:45:00.551015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.128 [2024-12-12 06:45:00.551085] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.128 [2024-12-12 06:45:00.551099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.128 [2024-12-12 06:45:00.551156] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.128 [2024-12-12 06:45:00.551170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.128 #19 NEW cov: 12373 ft: 14754 corp: 8/168b lim: 35 exec/s: 0 rss: 72Mb L: 23/33 MS: 1 EraseBytes- 00:06:53.128 [2024-12-12 06:45:00.611127] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.128 [2024-12-12 06:45:00.611159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.128 #20 NEW cov: 12373 ft: 14812 corp: 9/187b lim: 35 exec/s: 0 rss: 72Mb L: 19/33 MS: 1 ChangeByte- 00:06:53.388 [2024-12-12 06:45:00.651309] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff0074 cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.388 [2024-12-12 06:45:00.651335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.388 [2024-12-12 06:45:00.651389] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:fe2d00ff cdw11:2f0072ce SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.388 [2024-12-12 06:45:00.651403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.388 #21 NEW cov: 12373 ft: 14967 corp: 10/211b lim: 35 exec/s: 0 rss: 72Mb L: 24/33 MS: 1 CMP- DE: "\376-r\316/!\002\000"- 00:06:53.388 [2024-12-12 06:45:00.711623] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff0074 cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.388 [2024-12-12 06:45:00.711649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.388 [2024-12-12 06:45:00.711707] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.388 [2024-12-12 06:45:00.711721] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.388 [2024-12-12 06:45:00.711778] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.388 [2024-12-12 06:45:00.711792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.388 [2024-12-12 06:45:00.711848] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.388 [2024-12-12 06:45:00.711861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:53.388 #22 NEW cov: 12373 ft: 15005 corp: 11/244b lim: 35 exec/s: 0 rss: 73Mb L: 33/33 MS: 1 ChangeBit- 00:06:53.388 [2024-12-12 06:45:00.751699] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff0074 cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.388 [2024-12-12 06:45:00.751727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.388 [2024-12-12 06:45:00.751798] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.388 [2024-12-12 06:45:00.751813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.388 [2024-12-12 06:45:00.751866] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:c600ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.388 [2024-12-12 06:45:00.751880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.388 [2024-12-12 06:45:00.751935] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.388 [2024-12-12 06:45:00.751948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:53.388 #23 NEW cov: 12373 ft: 15014 corp: 12/277b lim: 35 exec/s: 0 rss: 73Mb L: 33/33 MS: 1 ChangeByte- 00:06:53.388 [2024-12-12 06:45:00.811733] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.388 [2024-12-12 06:45:00.811760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.388 NEW_FUNC[1/1]: 0x1c586b8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:652 00:06:53.388 #24 NEW cov: 12396 ft: 15037 corp: 13/297b lim: 35 exec/s: 0 rss: 73Mb L: 20/33 MS: 1 CrossOver- 00:06:53.388 [2024-12-12 06:45:00.871901] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:53.388 [2024-12-12 06:45:00.872126] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff0074 cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.388 [2024-12-12 06:45:00.872157] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.388 [2024-12-12 06:45:00.872215] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.388 [2024-12-12 06:45:00.872230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.388 [2024-12-12 06:45:00.872286] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:0000ff04 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.388 [2024-12-12 06:45:00.872300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.388 [2024-12-12 06:45:00.872356] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:ff000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.388 [2024-12-12 06:45:00.872374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:53.648 #25 NEW cov: 12396 ft: 15057 corp: 14/330b lim: 35 exec/s: 0 rss: 73Mb L: 33/33 MS: 1 CMP- DE: "\004\000\000\000\000\000\000\000"- 00:06:53.648 [2024-12-12 06:45:00.932055] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:53.648 [2024-12-12 06:45:00.932539] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.648 [2024-12-12 06:45:00.932566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.648 [2024-12-12 06:45:00.932622] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000005 cdw11:ff00000a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.648 [2024-12-12 06:45:00.932638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.648 [2024-12-12 06:45:00.932692] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.648 [2024-12-12 06:45:00.932706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:53.648 [2024-12-12 06:45:00.932757] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.648 [2024-12-12 06:45:00.932770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:53.648 NEW_FUNC[1/1]: 0x130d148 in spdk_nvmf_ns_identify_iocs_specific /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:3098 00:06:53.648 #26 NEW cov: 12412 ft: 15110 corp: 15/365b lim: 35 exec/s: 26 rss: 73Mb L: 35/35 MS: 1 CrossOver- 00:06:53.648 [2024-12-12 06:45:00.992408] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff0074 cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.648 [2024-12-12 06:45:00.992434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 
sqhd:000f p:0 m:0 dnr:0 00:06:53.648 [2024-12-12 06:45:00.992491] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.648 [2024-12-12 06:45:00.992505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.648 [2024-12-12 06:45:00.992556] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.648 [2024-12-12 06:45:00.992570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.648 [2024-12-12 06:45:00.992623] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.648 [2024-12-12 06:45:00.992636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:53.648 #27 NEW cov: 12412 ft: 15174 corp: 16/399b lim: 35 exec/s: 27 rss: 73Mb L: 34/35 MS: 1 CrossOver- 00:06:53.648 [2024-12-12 06:45:01.032243] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.648 [2024-12-12 06:45:01.032269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.648 [2024-12-12 06:45:01.032323] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:050000ff cdw11:0a000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.648 [2024-12-12 06:45:01.032340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.648 #28 NEW cov: 12412 ft: 15225 corp: 17/413b lim: 35 exec/s: 28 rss: 73Mb L: 14/35 MS: 1 EraseBytes- 00:06:53.648 [2024-12-12 06:45:01.072455] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.648 [2024-12-12 06:45:01.072479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.648 #29 NEW cov: 12412 ft: 15283 corp: 18/433b lim: 35 exec/s: 29 rss: 73Mb L: 20/35 MS: 1 ChangeBinInt- 00:06:53.648 [2024-12-12 06:45:01.112654] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff0074 cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.648 [2024-12-12 06:45:01.112681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.648 [2024-12-12 06:45:01.112736] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:fe2d00ff cdw11:2f0072ce SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.648 [2024-12-12 06:45:01.112750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.648 #30 NEW cov: 12412 ft: 15330 corp: 19/457b lim: 35 exec/s: 30 rss: 73Mb L: 24/35 MS: 1 CrossOver- 00:06:53.908 [2024-12-12 06:45:01.172494] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:53.908 
[2024-12-12 06:45:01.172975] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.908 [2024-12-12 06:45:01.173001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.908 [2024-12-12 06:45:01.173058] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000005 cdw11:ff00000a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.908 [2024-12-12 06:45:01.173074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.908 [2024-12-12 06:45:01.173131] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:fffe00ff cdw11:ce002d72 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.908 [2024-12-12 06:45:01.173145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:53.908 [2024-12-12 06:45:01.173206] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:02000021 cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.908 [2024-12-12 06:45:01.173219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:53.908 #31 NEW cov: 12412 ft: 15394 corp: 20/492b lim: 35 exec/s: 31 rss: 73Mb L: 35/35 MS: 1 PersAutoDict- DE: "\376-r\316/!\002\000"- 00:06:53.908 [2024-12-12 06:45:01.232875] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:53.908 [2024-12-12 06:45:01.233357] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.908 [2024-12-12 06:45:01.233383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.908 [2024-12-12 06:45:01.233442] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000005 cdw11:ff00000a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.908 [2024-12-12 06:45:01.233457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.908 [2024-12-12 06:45:01.233511] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.908 [2024-12-12 06:45:01.233527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:53.908 [2024-12-12 06:45:01.233583] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:02000021 cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.908 [2024-12-12 06:45:01.233596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:53.908 #32 NEW cov: 12412 ft: 15412 corp: 21/527b lim: 35 exec/s: 32 rss: 73Mb L: 35/35 MS: 1 CrossOver- 00:06:53.908 [2024-12-12 06:45:01.293402] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:2102002f cdw11:ff0000ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.908 
[2024-12-12 06:45:01.293427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.908 [2024-12-12 06:45:01.293486] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.908 [2024-12-12 06:45:01.293500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.908 [2024-12-12 06:45:01.293556] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:050000ff cdw11:0a000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.908 [2024-12-12 06:45:01.293570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:53.908 #33 NEW cov: 12412 ft: 15451 corp: 22/555b lim: 35 exec/s: 33 rss: 73Mb L: 28/35 MS: 1 PersAutoDict- DE: "\376-r\316/!\002\000"- 00:06:53.908 [2024-12-12 06:45:01.333325] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.908 [2024-12-12 06:45:01.333351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.908 [2024-12-12 06:45:01.333425] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.908 [2024-12-12 06:45:01.333439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.908 #34 NEW cov: 12412 ft: 15473 corp: 23/577b lim: 35 exec/s: 34 rss: 73Mb L: 22/35 MS: 1 InsertRepeatedBytes- 00:06:53.908 [2024-12-12 06:45:01.393513] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff0074 cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.908 [2024-12-12 06:45:01.393539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.908 [2024-12-12 06:45:01.393613] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.908 [2024-12-12 06:45:01.393627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.908 [2024-12-12 06:45:01.393682] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.908 [2024-12-12 06:45:01.393695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.908 [2024-12-12 06:45:01.393751] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.908 [2024-12-12 06:45:01.393765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:53.908 #35 NEW cov: 12412 ft: 15527 corp: 24/610b lim: 35 exec/s: 35 rss: 73Mb L: 33/35 MS: 1 ShuffleBytes- 00:06:54.167 [2024-12-12 06:45:01.433681] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.167 [2024-12-12 06:45:01.433709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.167 [2024-12-12 06:45:01.433765] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.167 [2024-12-12 06:45:01.433778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:54.167 [2024-12-12 06:45:01.433833] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:050000ff cdw11:0a000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.167 [2024-12-12 06:45:01.433846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:54.167 #36 NEW cov: 12412 ft: 15532 corp: 25/638b lim: 35 exec/s: 36 rss: 74Mb L: 28/35 MS: 1 CopyPart- 00:06:54.167 [2024-12-12 06:45:01.493551] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.167 [2024-12-12 06:45:01.493576] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.167 #37 NEW cov: 12412 ft: 15551 corp: 26/658b lim: 35 exec/s: 37 rss: 74Mb L: 20/35 MS: 1 InsertByte- 00:06:54.167 [2024-12-12 06:45:01.533546] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:54.167 [2024-12-12 06:45:01.533811] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:2d7200fe cdw11:2100ce2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.167 [2024-12-12 06:45:01.533837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.167 [2024-12-12 06:45:01.533893] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff0000 cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.167 [2024-12-12 06:45:01.533910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:54.167 #38 NEW cov: 12412 ft: 15557 corp: 27/685b lim: 35 exec/s: 38 rss: 74Mb L: 27/35 MS: 1 PersAutoDict- DE: "\376-r\316/!\002\000"- 00:06:54.167 [2024-12-12 06:45:01.573778] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.167 [2024-12-12 06:45:01.573804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.167 #39 NEW cov: 12412 ft: 15566 corp: 28/705b lim: 35 exec/s: 39 rss: 74Mb L: 20/35 MS: 1 ChangeBit- 00:06:54.167 [2024-12-12 06:45:01.614076] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.167 [2024-12-12 06:45:01.614102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.167 [2024-12-12 06:45:01.614161] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ff0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.167 [2024-12-12 06:45:01.614175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:54.167 #40 NEW cov: 12412 ft: 15575 corp: 29/727b lim: 35 exec/s: 40 rss: 74Mb L: 22/35 MS: 1 CMP- DE: "\001\037"- 00:06:54.167 [2024-12-12 06:45:01.654161] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.167 [2024-12-12 06:45:01.654186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.167 [2024-12-12 06:45:01.654240] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.167 [2024-12-12 06:45:01.654256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:54.426 #41 NEW cov: 12412 ft: 15578 corp: 30/749b lim: 35 exec/s: 41 rss: 74Mb L: 22/35 MS: 1 PersAutoDict- DE: "\001\037"- 00:06:54.426 [2024-12-12 06:45:01.714417] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff0074 cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.426 [2024-12-12 06:45:01.714443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.426 [2024-12-12 06:45:01.714516] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:fe2d00ff cdw11:02002f21 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.426 [2024-12-12 06:45:01.714530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.426 [2024-12-12 06:45:01.714584] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:2f0072ce SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.426 [2024-12-12 06:45:01.714598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:54.426 #42 NEW cov: 12412 ft: 15584 corp: 31/780b lim: 35 exec/s: 42 rss: 74Mb L: 31/35 MS: 1 CopyPart- 00:06:54.426 [2024-12-12 06:45:01.774438] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.426 [2024-12-12 06:45:01.774463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.426 [2024-12-12 06:45:01.774517] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:72ce002d cdw11:02002f21 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.426 [2024-12-12 06:45:01.774531] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.426 [2024-12-12 06:45:01.774581] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ff0500ff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.426 [2024-12-12 06:45:01.774594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 
sqhd:0011 p:0 m:0 dnr:0 00:06:54.426 #43 NEW cov: 12412 ft: 15613 corp: 32/802b lim: 35 exec/s: 43 rss: 74Mb L: 22/35 MS: 1 PersAutoDict- DE: "\376-r\316/!\002\000"- 00:06:54.426 [2024-12-12 06:45:01.834493] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:54.426 [2024-12-12 06:45:01.834763] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:2d7200fe cdw11:2100ce2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.426 [2024-12-12 06:45:01.834789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.426 [2024-12-12 06:45:01.834844] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff0000 cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.426 [2024-12-12 06:45:01.834861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:54.426 #44 NEW cov: 12412 ft: 15662 corp: 33/829b lim: 35 exec/s: 44 rss: 74Mb L: 27/35 MS: 1 CopyPart- 00:06:54.426 [2024-12-12 06:45:01.894788] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.426 [2024-12-12 06:45:01.894813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.426 [2024-12-12 06:45:01.894869] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:72ce002d cdw11:02002f21 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.426 [2024-12-12 06:45:01.894886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.426 [2024-12-12 06:45:01.894939] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ff0500ff cdw11:00002d00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.426 [2024-12-12 06:45:01.894952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:54.426 #45 NEW cov: 12412 ft: 15696 corp: 34/851b lim: 35 exec/s: 45 rss: 74Mb L: 22/35 MS: 1 ChangeByte- 00:06:54.686 #46 NEW cov: 12412 ft: 15849 corp: 35/863b lim: 35 exec/s: 23 rss: 74Mb L: 12/35 MS: 1 EraseBytes- 00:06:54.686 #46 DONE cov: 12412 ft: 15849 corp: 35/863b lim: 35 exec/s: 23 rss: 74Mb 00:06:54.686 ###### Recommended dictionary. ###### 00:06:54.686 "t\000" # Uses: 0 00:06:54.686 "\376-r\316/!\002\000" # Uses: 4 00:06:54.686 "\004\000\000\000\000\000\000\000" # Uses: 0 00:06:54.686 "\001\037" # Uses: 1 00:06:54.686 ###### End of recommended dictionary. 
###### 00:06:54.686 Done 46 runs in 2 second(s) 00:06:54.686 06:45:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_2.conf /var/tmp/suppress_nvmf_fuzz 00:06:54.686 06:45:02 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:54.686 06:45:02 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:54.686 06:45:02 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:06:54.686 06:45:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=3 00:06:54.686 06:45:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:54.686 06:45:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:54.686 06:45:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:06:54.686 06:45:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_3.conf 00:06:54.686 06:45:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:54.686 06:45:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:54.686 06:45:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 3 00:06:54.686 06:45:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4403 00:06:54.686 06:45:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:06:54.686 06:45:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' 00:06:54.686 06:45:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4403"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:54.686 06:45:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:54.686 06:45:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:54.686 06:45:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' -c /tmp/fuzz_json_3.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 -Z 3 00:06:54.686 [2024-12-12 06:45:02.147712] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 00:06:54.686 [2024-12-12 06:45:02.147792] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1153995 ] 00:06:54.945 [2024-12-12 06:45:02.336856] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.945 [2024-12-12 06:45:02.370964] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.945 [2024-12-12 06:45:02.430194] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:54.945 [2024-12-12 06:45:02.446502] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4403 *** 00:06:54.945 INFO: Running with entropic power schedule (0xFF, 100). 
00:06:54.945 INFO: Seed: 2434270492 00:06:55.204 INFO: Loaded 1 modules (390948 inline 8-bit counters): 390948 [0x2c8afcc, 0x2cea6f0), 00:06:55.204 INFO: Loaded 1 PC tables (390948 PCs): 390948 [0x2cea6f0,0x32e1930), 00:06:55.204 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:06:55.204 INFO: A corpus is not provided, starting from an empty corpus 00:06:55.204 #2 INITED exec/s: 0 rss: 66Mb 00:06:55.204 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:55.204 This may also happen if the target rejected all inputs we tried so far 00:06:55.463 NEW_FUNC[1/705]: 0x440c78 in fuzz_admin_abort_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:114 00:06:55.463 NEW_FUNC[2/705]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:55.463 #8 NEW cov: 12079 ft: 12080 corp: 2/19b lim: 20 exec/s: 0 rss: 72Mb L: 18/18 MS: 1 InsertRepeatedBytes- 00:06:55.463 #9 NEW cov: 12192 ft: 12616 corp: 3/37b lim: 20 exec/s: 0 rss: 72Mb L: 18/18 MS: 1 ChangeByte- 00:06:55.463 #10 NEW cov: 12198 ft: 12910 corp: 4/56b lim: 20 exec/s: 0 rss: 72Mb L: 19/19 MS: 1 CopyPart- 00:06:55.722 #13 NEW cov: 12288 ft: 13550 corp: 5/64b lim: 20 exec/s: 0 rss: 73Mb L: 8/19 MS: 3 InsertByte-ChangeBit-InsertRepeatedBytes- 00:06:55.722 #14 NEW cov: 12288 ft: 13663 corp: 6/82b lim: 20 exec/s: 0 rss: 73Mb L: 18/19 MS: 1 ChangeASCIIInt- 00:06:55.722 #15 NEW cov: 12288 ft: 13730 corp: 7/100b lim: 20 exec/s: 0 rss: 73Mb L: 18/19 MS: 1 ChangeBinInt- 00:06:55.722 #16 NEW cov: 12288 ft: 13794 corp: 8/119b lim: 20 exec/s: 0 rss: 73Mb L: 19/19 MS: 1 ShuffleBytes- 00:06:55.722 #17 NEW cov: 12288 ft: 13888 corp: 9/136b lim: 20 exec/s: 0 rss: 73Mb L: 17/19 MS: 1 EraseBytes- 00:06:55.981 #23 NEW cov: 12292 ft: 14021 corp: 10/150b lim: 20 exec/s: 0 rss: 73Mb L: 14/19 MS: 1 EraseBytes- 00:06:55.981 #24 NEW cov: 12292 ft: 14168 corp: 11/169b lim: 20 exec/s: 0 rss: 73Mb L: 19/19 MS: 1 CopyPart- 00:06:55.981 NEW_FUNC[1/1]: 0x1c586b8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:652 00:06:55.981 #25 NEW cov: 12315 ft: 14208 corp: 12/187b lim: 20 exec/s: 0 rss: 73Mb L: 18/19 MS: 1 ChangeBit- 00:06:55.981 #26 NEW cov: 12315 ft: 14240 corp: 13/205b lim: 20 exec/s: 0 rss: 73Mb L: 18/19 MS: 1 ChangeBit- 00:06:56.240 #27 NEW cov: 12315 ft: 14266 corp: 14/224b lim: 20 exec/s: 27 rss: 73Mb L: 19/19 MS: 1 ChangeBit- 00:06:56.240 #28 NEW cov: 12315 ft: 14307 corp: 15/242b lim: 20 exec/s: 28 rss: 73Mb L: 18/19 MS: 1 ChangeByte- 00:06:56.240 #29 NEW cov: 12315 ft: 14324 corp: 16/261b lim: 20 exec/s: 29 rss: 73Mb L: 19/19 MS: 1 InsertByte- 00:06:56.240 #30 NEW cov: 12315 ft: 14336 corp: 17/279b lim: 20 exec/s: 30 rss: 73Mb L: 18/19 MS: 1 ShuffleBytes- 00:06:56.240 #31 NEW cov: 12315 ft: 14344 corp: 18/297b lim: 20 exec/s: 31 rss: 73Mb L: 18/19 MS: 1 ChangeBinInt- 00:06:56.240 [2024-12-12 06:45:03.726761] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:56.240 [2024-12-12 06:45:03.726804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.240 NEW_FUNC[1/19]: 0x137c5c8 in nvmf_qpair_abort_request /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:3485 00:06:56.240 NEW_FUNC[2/19]: 0x137d148 in nvmf_qpair_abort_aer 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:3427 00:06:56.240 #32 NEW cov: 12617 ft: 14689 corp: 19/315b lim: 20 exec/s: 32 rss: 73Mb L: 18/19 MS: 1 CMP- DE: "\020\000\000\000\000\000\000\000"- 00:06:56.498 #33 NEW cov: 12617 ft: 14727 corp: 20/327b lim: 20 exec/s: 33 rss: 73Mb L: 12/19 MS: 1 EraseBytes- 00:06:56.498 #34 NEW cov: 12617 ft: 14758 corp: 21/345b lim: 20 exec/s: 34 rss: 73Mb L: 18/19 MS: 1 CrossOver- 00:06:56.498 #35 NEW cov: 12617 ft: 14786 corp: 22/362b lim: 20 exec/s: 35 rss: 73Mb L: 17/19 MS: 1 ShuffleBytes- 00:06:56.498 #36 NEW cov: 12617 ft: 14820 corp: 23/379b lim: 20 exec/s: 36 rss: 74Mb L: 17/19 MS: 1 ChangeASCIIInt- 00:06:56.757 #37 NEW cov: 12617 ft: 14844 corp: 24/397b lim: 20 exec/s: 37 rss: 74Mb L: 18/19 MS: 1 ShuffleBytes- 00:06:56.757 #38 NEW cov: 12617 ft: 14864 corp: 25/412b lim: 20 exec/s: 38 rss: 74Mb L: 15/19 MS: 1 EraseBytes- 00:06:56.757 #39 NEW cov: 12617 ft: 14885 corp: 26/429b lim: 20 exec/s: 39 rss: 74Mb L: 17/19 MS: 1 ChangeBinInt- 00:06:56.757 #40 NEW cov: 12617 ft: 14894 corp: 27/447b lim: 20 exec/s: 40 rss: 74Mb L: 18/19 MS: 1 ChangeByte- 00:06:57.016 #41 NEW cov: 12617 ft: 14923 corp: 28/464b lim: 20 exec/s: 41 rss: 74Mb L: 17/19 MS: 1 ChangeBit- 00:06:57.016 #42 NEW cov: 12617 ft: 14932 corp: 29/482b lim: 20 exec/s: 42 rss: 74Mb L: 18/19 MS: 1 ChangeByte- 00:06:57.016 #43 NEW cov: 12617 ft: 14957 corp: 30/501b lim: 20 exec/s: 43 rss: 74Mb L: 19/19 MS: 1 ChangeByte- 00:06:57.016 #44 NEW cov: 12617 ft: 14990 corp: 31/518b lim: 20 exec/s: 44 rss: 74Mb L: 17/19 MS: 1 ChangeByte- 00:06:57.016 #45 NEW cov: 12617 ft: 15021 corp: 32/537b lim: 20 exec/s: 22 rss: 74Mb L: 19/19 MS: 1 InsertByte- 00:06:57.016 #45 DONE cov: 12617 ft: 15021 corp: 32/537b lim: 20 exec/s: 22 rss: 74Mb 00:06:57.016 ###### Recommended dictionary. ###### 00:06:57.016 "\020\000\000\000\000\000\000\000" # Uses: 0 00:06:57.016 ###### End of recommended dictionary. 
###### 00:06:57.016 Done 45 runs in 2 second(s) 00:06:57.275 06:45:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_3.conf /var/tmp/suppress_nvmf_fuzz 00:06:57.275 06:45:04 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:57.275 06:45:04 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:57.275 06:45:04 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:06:57.275 06:45:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=4 00:06:57.275 06:45:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:57.275 06:45:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:57.275 06:45:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:06:57.275 06:45:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_4.conf 00:06:57.275 06:45:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:57.275 06:45:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:57.275 06:45:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 4 00:06:57.275 06:45:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4404 00:06:57.275 06:45:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:06:57.275 06:45:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' 00:06:57.275 06:45:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4404"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:57.275 06:45:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:57.275 06:45:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:57.276 06:45:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' -c /tmp/fuzz_json_4.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 -Z 4 00:06:57.276 [2024-12-12 06:45:04.659491] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 00:06:57.276 [2024-12-12 06:45:04.659557] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1154738 ] 00:06:57.534 [2024-12-12 06:45:04.851379] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.534 [2024-12-12 06:45:04.887488] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.534 [2024-12-12 06:45:04.946279] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:57.534 [2024-12-12 06:45:04.962579] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4404 *** 00:06:57.534 INFO: Running with entropic power schedule (0xFF, 100). 
00:06:57.534 INFO: Seed: 656288526 00:06:57.534 INFO: Loaded 1 modules (390948 inline 8-bit counters): 390948 [0x2c8afcc, 0x2cea6f0), 00:06:57.534 INFO: Loaded 1 PC tables (390948 PCs): 390948 [0x2cea6f0,0x32e1930), 00:06:57.534 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:06:57.534 INFO: A corpus is not provided, starting from an empty corpus 00:06:57.534 #2 INITED exec/s: 0 rss: 65Mb 00:06:57.534 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:57.534 This may also happen if the target rejected all inputs we tried so far 00:06:57.534 [2024-12-12 06:45:05.028109] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.534 [2024-12-12 06:45:05.028138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.534 [2024-12-12 06:45:05.028197] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.534 [2024-12-12 06:45:05.028211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.067 NEW_FUNC[1/716]: 0x441d78 in fuzz_admin_create_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:126 00:06:58.067 NEW_FUNC[2/716]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:58.067 #9 NEW cov: 12159 ft: 12158 corp: 2/18b lim: 35 exec/s: 0 rss: 72Mb L: 17/17 MS: 2 CopyPart-InsertRepeatedBytes- 00:06:58.067 [2024-12-12 06:45:05.379487] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00002100 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.067 [2024-12-12 06:45:05.379546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.067 [2024-12-12 06:45:05.379630] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.067 [2024-12-12 06:45:05.379657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.067 [2024-12-12 06:45:05.379736] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.067 [2024-12-12 06:45:05.379761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.067 NEW_FUNC[1/1]: 0x19c4cf8 in nvme_qpair_is_admin_queue /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/./nvme_internal.h:1194 00:06:58.067 #13 NEW cov: 12292 ft: 13107 corp: 3/42b lim: 35 exec/s: 0 rss: 72Mb L: 24/24 MS: 4 CopyPart-InsertByte-ChangeByte-InsertRepeatedBytes- 00:06:58.067 [2024-12-12 06:45:05.429154] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.067 [2024-12-12 06:45:05.429183] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.067 [2024-12-12 06:45:05.429238] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.067 [2024-12-12 06:45:05.429252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.067 #14 NEW cov: 12298 ft: 13367 corp: 4/59b lim: 35 exec/s: 0 rss: 72Mb L: 17/24 MS: 1 ChangeBit- 00:06:58.067 [2024-12-12 06:45:05.489337] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.067 [2024-12-12 06:45:05.489365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.067 [2024-12-12 06:45:05.489422] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.067 [2024-12-12 06:45:05.489436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.067 #15 NEW cov: 12383 ft: 13622 corp: 5/77b lim: 35 exec/s: 0 rss: 72Mb L: 18/24 MS: 1 CrossOver- 00:06:58.067 [2024-12-12 06:45:05.529409] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.067 [2024-12-12 06:45:05.529435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.067 [2024-12-12 06:45:05.529488] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000011 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.067 [2024-12-12 06:45:05.529502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.067 #16 NEW cov: 12383 ft: 13748 corp: 6/94b lim: 35 exec/s: 0 rss: 72Mb L: 17/24 MS: 1 ChangeBinInt- 00:06:58.067 [2024-12-12 06:45:05.569826] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:8e8e0a8e cdw11:8e8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.067 [2024-12-12 06:45:05.569853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.067 [2024-12-12 06:45:05.569909] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.067 [2024-12-12 06:45:05.569923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.067 [2024-12-12 06:45:05.569977] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:8e8e8e8e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.067 [2024-12-12 06:45:05.569991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.067 [2024-12-12 06:45:05.570045] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:06:58.067 [2024-12-12 06:45:05.570058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:58.339 #17 NEW cov: 12383 ft: 14216 corp: 7/128b lim: 35 exec/s: 0 rss: 72Mb L: 34/34 MS: 1 InsertRepeatedBytes- 00:06:58.339 [2024-12-12 06:45:05.629966] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:8e8e0a8e cdw11:8e8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.339 [2024-12-12 06:45:05.629994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.339 [2024-12-12 06:45:05.630048] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.339 [2024-12-12 06:45:05.630062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.339 [2024-12-12 06:45:05.630114] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:8e8e8e8e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.339 [2024-12-12 06:45:05.630128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.339 [2024-12-12 06:45:05.630181] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:20000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.339 [2024-12-12 06:45:05.630195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:58.339 #23 NEW cov: 12383 ft: 14263 corp: 8/162b lim: 35 exec/s: 0 rss: 72Mb L: 34/34 MS: 1 ChangeBit- 00:06:58.339 [2024-12-12 06:45:05.690162] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.339 [2024-12-12 06:45:05.690189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.339 [2024-12-12 06:45:05.690243] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.340 [2024-12-12 06:45:05.690257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.340 [2024-12-12 06:45:05.690308] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.340 [2024-12-12 06:45:05.690321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.340 [2024-12-12 06:45:05.690372] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.340 [2024-12-12 06:45:05.690384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:58.340 #24 NEW cov: 12383 ft: 14295 corp: 9/192b lim: 35 exec/s: 0 rss: 72Mb L: 30/34 MS: 1 InsertRepeatedBytes- 00:06:58.340 [2024-12-12 06:45:05.730246] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:000a0a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.340 [2024-12-12 06:45:05.730272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.340 [2024-12-12 06:45:05.730325] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.340 [2024-12-12 06:45:05.730339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.340 [2024-12-12 06:45:05.730392] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.340 [2024-12-12 06:45:05.730405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.340 [2024-12-12 06:45:05.730457] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.340 [2024-12-12 06:45:05.730470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:58.340 #25 NEW cov: 12383 ft: 14351 corp: 10/226b lim: 35 exec/s: 0 rss: 72Mb L: 34/34 MS: 1 CopyPart- 00:06:58.340 [2024-12-12 06:45:05.790451] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:8e8e0a8e cdw11:8e8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.340 [2024-12-12 06:45:05.790477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.340 [2024-12-12 06:45:05.790530] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.340 [2024-12-12 06:45:05.790544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.340 [2024-12-12 06:45:05.790594] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:8e8e8e8e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.340 [2024-12-12 06:45:05.790607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.340 [2024-12-12 06:45:05.790661] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.340 [2024-12-12 06:45:05.790674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:58.340 #26 NEW cov: 12383 ft: 14390 corp: 11/260b lim: 35 exec/s: 0 rss: 72Mb L: 34/34 MS: 1 ShuffleBytes- 00:06:58.340 [2024-12-12 06:45:05.830351] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00002100 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.340 [2024-12-12 06:45:05.830377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.340 [2024-12-12 06:45:05.830429] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.340 [2024-12-12 06:45:05.830443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.340 [2024-12-12 06:45:05.830492] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.340 [2024-12-12 06:45:05.830505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.599 #27 NEW cov: 12383 ft: 14422 corp: 12/284b lim: 35 exec/s: 0 rss: 72Mb L: 24/34 MS: 1 ChangeByte- 00:06:58.599 [2024-12-12 06:45:05.890450] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00007100 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.599 [2024-12-12 06:45:05.890478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.599 [2024-12-12 06:45:05.890530] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.599 [2024-12-12 06:45:05.890544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.599 NEW_FUNC[1/1]: 0x1c586b8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:652 00:06:58.599 #28 NEW cov: 12406 ft: 14511 corp: 13/301b lim: 35 exec/s: 0 rss: 72Mb L: 17/34 MS: 1 ChangeByte- 00:06:58.600 [2024-12-12 06:45:05.930943] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:000a0a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.600 [2024-12-12 06:45:05.930969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.600 [2024-12-12 06:45:05.931021] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.600 [2024-12-12 06:45:05.931035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.600 [2024-12-12 06:45:05.931086] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:40000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.600 [2024-12-12 06:45:05.931099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.600 [2024-12-12 06:45:05.931153] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.600 [2024-12-12 06:45:05.931166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:58.600 [2024-12-12 06:45:05.931238] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.600 [2024-12-12 06:45:05.931255] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:58.600 #29 NEW cov: 12406 ft: 14612 corp: 14/336b lim: 35 exec/s: 0 rss: 73Mb L: 35/35 MS: 1 InsertByte- 00:06:58.600 [2024-12-12 06:45:05.990949] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:1b1b1b1b cdw11:1b1b0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.600 [2024-12-12 06:45:05.990974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.600 [2024-12-12 06:45:05.991028] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:1b1b1b1b cdw11:1b1b0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.600 [2024-12-12 06:45:05.991042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.600 [2024-12-12 06:45:05.991093] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:1b1b1b1b cdw11:1b1b0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.600 [2024-12-12 06:45:05.991106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.600 [2024-12-12 06:45:05.991159] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:1b1b1b1b cdw11:1b1b0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.600 [2024-12-12 06:45:05.991188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:58.600 #30 NEW cov: 12406 ft: 14617 corp: 15/364b lim: 35 exec/s: 30 rss: 73Mb L: 28/35 MS: 1 InsertRepeatedBytes- 00:06:58.600 [2024-12-12 06:45:06.030745] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:002b0a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.600 [2024-12-12 06:45:06.030770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.600 [2024-12-12 06:45:06.030824] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:11000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.600 [2024-12-12 06:45:06.030838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.600 #31 NEW cov: 12406 ft: 14627 corp: 16/382b lim: 35 exec/s: 31 rss: 73Mb L: 18/35 MS: 1 InsertByte- 00:06:58.600 [2024-12-12 06:45:06.091237] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:8e8e0a8e cdw11:8e8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.600 [2024-12-12 06:45:06.091263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.600 [2024-12-12 06:45:06.091332] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.600 [2024-12-12 06:45:06.091345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.600 [2024-12-12 06:45:06.091398] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:8e8e8e8e cdw11:00000000 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.600 [2024-12-12 06:45:06.091412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.600 [2024-12-12 06:45:06.091464] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.600 [2024-12-12 06:45:06.091477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:58.859 #32 NEW cov: 12406 ft: 14643 corp: 17/416b lim: 35 exec/s: 32 rss: 73Mb L: 34/35 MS: 1 ChangeBinInt- 00:06:58.859 [2024-12-12 06:45:06.151406] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:1b1b1b1b cdw11:1b1b0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.859 [2024-12-12 06:45:06.151435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.860 [2024-12-12 06:45:06.151488] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:1b1b1b1b cdw11:1b1b0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.860 [2024-12-12 06:45:06.151501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.860 [2024-12-12 06:45:06.151553] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:1b1b1b1b cdw11:1b1b0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.860 [2024-12-12 06:45:06.151582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.860 [2024-12-12 06:45:06.151634] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:1b1b1b1b cdw11:1b1b0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.860 [2024-12-12 06:45:06.151647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:58.860 #33 NEW cov: 12406 ft: 14647 corp: 18/444b lim: 35 exec/s: 33 rss: 73Mb L: 28/35 MS: 1 ShuffleBytes- 00:06:58.860 [2024-12-12 06:45:06.211273] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00010a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.860 [2024-12-12 06:45:06.211298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.860 [2024-12-12 06:45:06.211368] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.860 [2024-12-12 06:45:06.211382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.860 #34 NEW cov: 12406 ft: 14731 corp: 19/461b lim: 35 exec/s: 34 rss: 73Mb L: 17/35 MS: 1 ChangeBit- 00:06:58.860 [2024-12-12 06:45:06.251678] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:000a0a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.860 [2024-12-12 06:45:06.251703] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.860 [2024-12-12 06:45:06.251775] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.860 [2024-12-12 06:45:06.251789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.860 [2024-12-12 06:45:06.251844] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.860 [2024-12-12 06:45:06.251857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.860 [2024-12-12 06:45:06.251911] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.860 [2024-12-12 06:45:06.251924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:58.860 #35 NEW cov: 12406 ft: 14773 corp: 20/491b lim: 35 exec/s: 35 rss: 73Mb L: 30/35 MS: 1 EraseBytes- 00:06:58.860 [2024-12-12 06:45:06.291806] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:1b1b1b1b cdw11:1b1b0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.860 [2024-12-12 06:45:06.291832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.860 [2024-12-12 06:45:06.291885] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:1b1f1b1b cdw11:1b1b0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.860 [2024-12-12 06:45:06.291903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.860 [2024-12-12 06:45:06.291956] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:1b1b1b1b cdw11:1b1b0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.860 [2024-12-12 06:45:06.291971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.860 [2024-12-12 06:45:06.292024] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:1b1b1b1b cdw11:1b1b0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.860 [2024-12-12 06:45:06.292038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:58.860 #36 NEW cov: 12406 ft: 14780 corp: 21/519b lim: 35 exec/s: 36 rss: 73Mb L: 28/35 MS: 1 ChangeBit- 00:06:58.860 [2024-12-12 06:45:06.331577] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00010a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.860 [2024-12-12 06:45:06.331602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.860 [2024-12-12 06:45:06.331670] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.860 [2024-12-12 06:45:06.331684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.860 #37 NEW cov: 12406 ft: 14833 corp: 22/537b 
lim: 35 exec/s: 37 rss: 73Mb L: 18/35 MS: 1 InsertByte- 00:06:59.120 [2024-12-12 06:45:06.391792] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:002b0a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.120 [2024-12-12 06:45:06.391818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.120 [2024-12-12 06:45:06.391870] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.120 [2024-12-12 06:45:06.391884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.120 #38 NEW cov: 12406 ft: 14861 corp: 23/555b lim: 35 exec/s: 38 rss: 73Mb L: 18/35 MS: 1 CrossOver- 00:06:59.120 [2024-12-12 06:45:06.452405] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:002b0a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.120 [2024-12-12 06:45:06.452431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.120 [2024-12-12 06:45:06.452485] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:11000000 cdw11:000a0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.120 [2024-12-12 06:45:06.452499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.120 [2024-12-12 06:45:06.452549] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.120 [2024-12-12 06:45:06.452579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.120 [2024-12-12 06:45:06.452631] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.120 [2024-12-12 06:45:06.452644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:59.120 [2024-12-12 06:45:06.452697] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.120 [2024-12-12 06:45:06.452714] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:59.120 #39 NEW cov: 12406 ft: 14886 corp: 24/590b lim: 35 exec/s: 39 rss: 73Mb L: 35/35 MS: 1 CrossOver- 00:06:59.120 [2024-12-12 06:45:06.492347] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:000a0a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.120 [2024-12-12 06:45:06.492372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.120 [2024-12-12 06:45:06.492442] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.120 [2024-12-12 06:45:06.492456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.120 [2024-12-12 06:45:06.492510] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.120 [2024-12-12 06:45:06.492523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.120 [2024-12-12 06:45:06.492576] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.120 [2024-12-12 06:45:06.492589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:59.120 #40 NEW cov: 12406 ft: 14919 corp: 25/623b lim: 35 exec/s: 40 rss: 73Mb L: 33/35 MS: 1 InsertRepeatedBytes- 00:06:59.120 [2024-12-12 06:45:06.552726] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:002b0a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.120 [2024-12-12 06:45:06.552751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.120 [2024-12-12 06:45:06.552803] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:11000000 cdw11:fdf50000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.120 [2024-12-12 06:45:06.552817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.120 [2024-12-12 06:45:06.552868] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.120 [2024-12-12 06:45:06.552881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.120 [2024-12-12 06:45:06.552931] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.120 [2024-12-12 06:45:06.552943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:59.120 [2024-12-12 06:45:06.552995] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.120 [2024-12-12 06:45:06.553009] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:59.120 #41 NEW cov: 12406 ft: 14931 corp: 26/658b lim: 35 exec/s: 41 rss: 73Mb L: 35/35 MS: 1 ChangeBinInt- 00:06:59.120 [2024-12-12 06:45:06.612907] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:002b0a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.120 [2024-12-12 06:45:06.612932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.120 [2024-12-12 06:45:06.612989] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:11000000 cdw11:000a0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.120 [2024-12-12 06:45:06.613005] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.120 [2024-12-12 06:45:06.613059] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.120 [2024-12-12 06:45:06.613072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.120 [2024-12-12 06:45:06.613124] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:4e000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.120 [2024-12-12 06:45:06.613137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:59.120 [2024-12-12 06:45:06.613194] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.120 [2024-12-12 06:45:06.613208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:59.120 #42 NEW cov: 12406 ft: 14946 corp: 27/693b lim: 35 exec/s: 42 rss: 73Mb L: 35/35 MS: 1 ChangeByte- 00:06:59.381 [2024-12-12 06:45:06.652796] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.381 [2024-12-12 06:45:06.652822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.381 [2024-12-12 06:45:06.652877] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ff000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.381 [2024-12-12 06:45:06.652891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.381 [2024-12-12 06:45:06.652941] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00002b00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.381 [2024-12-12 06:45:06.652955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.381 [2024-12-12 06:45:06.653007] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.381 [2024-12-12 06:45:06.653020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:59.381 #43 NEW cov: 12406 ft: 14989 corp: 28/722b lim: 35 exec/s: 43 rss: 73Mb L: 29/35 MS: 1 InsertRepeatedBytes- 00:06:59.381 [2024-12-12 06:45:06.692767] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00002100 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.381 [2024-12-12 06:45:06.692794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.381 [2024-12-12 06:45:06.692848] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.381 [2024-12-12 06:45:06.692862] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.381 [2024-12-12 06:45:06.692915] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.381 [2024-12-12 06:45:06.692929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.381 #44 NEW cov: 12406 ft: 15016 corp: 29/746b lim: 35 exec/s: 44 rss: 73Mb L: 24/35 MS: 1 ChangeBinInt- 00:06:59.381 [2024-12-12 06:45:06.752832] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00003003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.381 [2024-12-12 06:45:06.752860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.381 [2024-12-12 06:45:06.752930] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.381 [2024-12-12 06:45:06.752944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.381 #54 NEW cov: 12406 ft: 15019 corp: 30/765b lim: 35 exec/s: 54 rss: 74Mb L: 19/35 MS: 5 CopyPart-ChangeBit-ChangeBit-InsertByte-InsertRepeatedBytes- 00:06:59.381 [2024-12-12 06:45:06.793203] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.381 [2024-12-12 06:45:06.793229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.381 [2024-12-12 06:45:06.793299] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.381 [2024-12-12 06:45:06.793313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.381 [2024-12-12 06:45:06.793368] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.381 [2024-12-12 06:45:06.793381] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.381 [2024-12-12 06:45:06.793433] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.381 [2024-12-12 06:45:06.793446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:59.381 #55 NEW cov: 12406 ft: 15080 corp: 31/794b lim: 35 exec/s: 55 rss: 74Mb L: 29/35 MS: 1 EraseBytes- 00:06:59.381 [2024-12-12 06:45:06.833500] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:1b1b1b1b cdw11:1b1b0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.381 [2024-12-12 06:45:06.833526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.381 [2024-12-12 06:45:06.833597] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:1b1b1b1b cdw11:1b1b0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.381 [2024-12-12 06:45:06.833612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.381 [2024-12-12 06:45:06.833664] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.381 [2024-12-12 06:45:06.833678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.381 [2024-12-12 06:45:06.833733] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:1b1b1b1b cdw11:1b1b0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.381 [2024-12-12 06:45:06.833747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:59.381 [2024-12-12 06:45:06.833801] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:1b1b1b1b cdw11:1b1b0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.381 [2024-12-12 06:45:06.833815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:59.381 #56 NEW cov: 12406 ft: 15084 corp: 32/829b lim: 35 exec/s: 56 rss: 74Mb L: 35/35 MS: 1 InsertRepeatedBytes- 00:06:59.381 [2024-12-12 06:45:06.893700] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:1b1b1b1b cdw11:1b1b0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.381 [2024-12-12 06:45:06.893729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.381 [2024-12-12 06:45:06.893799] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:1b1b1b1b cdw11:1b1b0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.381 [2024-12-12 06:45:06.893813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.381 [2024-12-12 06:45:06.893866] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:fffffffe cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.381 [2024-12-12 06:45:06.893879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.381 [2024-12-12 06:45:06.893933] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:1b1b1b1b cdw11:1b1b0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.381 [2024-12-12 06:45:06.893946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:59.381 [2024-12-12 06:45:06.893999] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:1b1b1b1b cdw11:1b1b0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.381 [2024-12-12 06:45:06.894012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:59.641 #57 NEW cov: 12406 ft: 15097 corp: 33/864b lim: 35 exec/s: 57 rss: 74Mb L: 35/35 MS: 1 ChangeBit- 00:06:59.641 [2024-12-12 06:45:06.953818] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:002b0a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.641 [2024-12-12 06:45:06.953844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.641 [2024-12-12 06:45:06.953914] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:11000000 cdw11:fdf50000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.641 [2024-12-12 06:45:06.953928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.641 [2024-12-12 06:45:06.953984] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.641 [2024-12-12 06:45:06.953997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.641 [2024-12-12 06:45:06.954049] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.641 [2024-12-12 06:45:06.954063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:59.641 [2024-12-12 06:45:06.954116] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.641 [2024-12-12 06:45:06.954129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:59.641 #58 NEW cov: 12406 ft: 15105 corp: 34/899b lim: 35 exec/s: 58 rss: 74Mb L: 35/35 MS: 1 ShuffleBytes- 00:06:59.642 [2024-12-12 06:45:07.013911] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:1b1b1b1b cdw11:1b1b0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.642 [2024-12-12 06:45:07.013937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.642 [2024-12-12 06:45:07.014005] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:1b1b1b1b cdw11:1b1b0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.642 [2024-12-12 06:45:07.014022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.642 [2024-12-12 06:45:07.014076] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:1b1b0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.642 [2024-12-12 06:45:07.014089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.642 [2024-12-12 06:45:07.014144] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:1b1b1b1b cdw11:1b1b0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.642 [2024-12-12 06:45:07.014162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:59.642 #59 NEW cov: 12406 ft: 15111 corp: 35/932b lim: 35 exec/s: 29 rss: 74Mb L: 33/35 MS: 1 EraseBytes- 00:06:59.642 #59 DONE cov: 12406 ft: 15111 corp: 
35/932b lim: 35 exec/s: 29 rss: 74Mb 00:06:59.642 Done 59 runs in 2 second(s) 00:06:59.642 06:45:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_4.conf /var/tmp/suppress_nvmf_fuzz 00:06:59.642 06:45:07 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:59.642 06:45:07 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:59.642 06:45:07 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1 00:06:59.642 06:45:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=5 00:06:59.642 06:45:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:59.642 06:45:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:59.642 06:45:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:06:59.642 06:45:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_5.conf 00:06:59.642 06:45:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:59.642 06:45:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:59.642 06:45:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 5 00:06:59.642 06:45:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4405 00:06:59.642 06:45:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:06:59.642 06:45:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' 00:06:59.642 06:45:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4405"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:59.642 06:45:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:59.642 06:45:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:59.642 06:45:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' -c /tmp/fuzz_json_5.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 -Z 5 00:06:59.901 [2024-12-12 06:45:07.186893] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 
00:06:59.901 [2024-12-12 06:45:07.186978] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1155482 ] 00:06:59.901 [2024-12-12 06:45:07.377581] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.901 [2024-12-12 06:45:07.410654] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.160 [2024-12-12 06:45:07.469598] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:00.160 [2024-12-12 06:45:07.485928] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4405 *** 00:07:00.160 INFO: Running with entropic power schedule (0xFF, 100). 00:07:00.160 INFO: Seed: 3179308532 00:07:00.160 INFO: Loaded 1 modules (390948 inline 8-bit counters): 390948 [0x2c8afcc, 0x2cea6f0), 00:07:00.160 INFO: Loaded 1 PC tables (390948 PCs): 390948 [0x2cea6f0,0x32e1930), 00:07:00.160 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:07:00.160 INFO: A corpus is not provided, starting from an empty corpus 00:07:00.160 #2 INITED exec/s: 0 rss: 65Mb 00:07:00.160 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:00.160 This may also happen if the target rejected all inputs we tried so far 00:07:00.160 [2024-12-12 06:45:07.541271] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.160 [2024-12-12 06:45:07.541298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.420 NEW_FUNC[1/717]: 0x443f18 in fuzz_admin_create_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:142 00:07:00.420 NEW_FUNC[2/717]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:00.420 #3 NEW cov: 12191 ft: 12186 corp: 2/18b lim: 45 exec/s: 0 rss: 72Mb L: 17/17 MS: 1 InsertRepeatedBytes- 00:07:00.420 [2024-12-12 06:45:07.852335] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.420 [2024-12-12 06:45:07.852373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.420 [2024-12-12 06:45:07.852444] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.420 [2024-12-12 06:45:07.852462] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.420 #8 NEW cov: 12304 ft: 13484 corp: 3/37b lim: 45 exec/s: 0 rss: 72Mb L: 19/19 MS: 5 ChangeBit-CopyPart-ShuffleBytes-ChangeByte-InsertRepeatedBytes- 00:07:00.420 [2024-12-12 06:45:07.892145] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.420 [2024-12-12 06:45:07.892175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 
m:0 dnr:0 00:07:00.420 #9 NEW cov: 12310 ft: 13791 corp: 4/52b lim: 45 exec/s: 0 rss: 72Mb L: 15/19 MS: 1 CrossOver- 00:07:00.420 [2024-12-12 06:45:07.932291] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.420 [2024-12-12 06:45:07.932317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.679 #10 NEW cov: 12395 ft: 13983 corp: 5/67b lim: 45 exec/s: 0 rss: 72Mb L: 15/19 MS: 1 ChangeBit- 00:07:00.679 [2024-12-12 06:45:07.992454] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.679 [2024-12-12 06:45:07.992479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.679 #11 NEW cov: 12395 ft: 14097 corp: 6/82b lim: 45 exec/s: 0 rss: 72Mb L: 15/19 MS: 1 ShuffleBytes- 00:07:00.679 [2024-12-12 06:45:08.032554] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.679 [2024-12-12 06:45:08.032580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.679 #12 NEW cov: 12395 ft: 14250 corp: 7/99b lim: 45 exec/s: 0 rss: 72Mb L: 17/19 MS: 1 ShuffleBytes- 00:07:00.679 [2024-12-12 06:45:08.092750] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.679 [2024-12-12 06:45:08.092774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.679 #13 NEW cov: 12395 ft: 14354 corp: 8/114b lim: 45 exec/s: 0 rss: 72Mb L: 15/19 MS: 1 ChangeByte- 00:07:00.679 [2024-12-12 06:45:08.132813] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.679 [2024-12-12 06:45:08.132839] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.679 #14 NEW cov: 12395 ft: 14382 corp: 9/129b lim: 45 exec/s: 0 rss: 72Mb L: 15/19 MS: 1 ChangeBinInt- 00:07:00.679 [2024-12-12 06:45:08.193354] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.679 [2024-12-12 06:45:08.193380] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.679 [2024-12-12 06:45:08.193435] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.679 [2024-12-12 06:45:08.193448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.679 [2024-12-12 06:45:08.193502] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.679 [2024-12-12 06:45:08.193515] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.942 #15 NEW cov: 12395 ft: 14746 corp: 10/156b lim: 45 exec/s: 0 rss: 72Mb L: 27/27 MS: 1 CrossOver- 00:07:00.942 [2024-12-12 06:45:08.233091] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.942 [2024-12-12 06:45:08.233117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.942 #16 NEW cov: 12395 ft: 14802 corp: 11/172b lim: 45 exec/s: 0 rss: 72Mb L: 16/27 MS: 1 InsertByte- 00:07:00.942 [2024-12-12 06:45:08.273392] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.942 [2024-12-12 06:45:08.273418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.942 [2024-12-12 06:45:08.273490] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:4d000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.942 [2024-12-12 06:45:08.273503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.942 #17 NEW cov: 12395 ft: 14841 corp: 12/193b lim: 45 exec/s: 0 rss: 72Mb L: 21/27 MS: 1 CopyPart- 00:07:00.942 [2024-12-12 06:45:08.333409] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.942 [2024-12-12 06:45:08.333434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.942 #18 NEW cov: 12395 ft: 14860 corp: 13/208b lim: 45 exec/s: 0 rss: 73Mb L: 15/27 MS: 1 ChangeBinInt- 00:07:00.942 [2024-12-12 06:45:08.393741] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.942 [2024-12-12 06:45:08.393766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.942 [2024-12-12 06:45:08.393837] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ff00ffff cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.942 [2024-12-12 06:45:08.393854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.942 NEW_FUNC[1/1]: 0x1c586b8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:652 00:07:00.942 #19 NEW cov: 12418 ft: 14912 corp: 14/234b lim: 45 exec/s: 0 rss: 73Mb L: 26/27 MS: 1 InsertRepeatedBytes- 00:07:00.942 [2024-12-12 06:45:08.453732] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.942 [2024-12-12 06:45:08.453757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.202 #20 NEW cov: 12418 ft: 14931 corp: 15/249b lim: 45 exec/s: 0 rss: 73Mb L: 15/27 MS: 1 ChangeBit- 00:07:01.202 [2024-12-12 06:45:08.494022] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ff0000fc cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.202 [2024-12-12 06:45:08.494049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.202 [2024-12-12 06:45:08.494101] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ff00ffff cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.202 [2024-12-12 06:45:08.494115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.202 #21 NEW cov: 12418 ft: 14937 corp: 16/275b lim: 45 exec/s: 21 rss: 73Mb L: 26/27 MS: 1 ChangeBinInt- 00:07:01.202 [2024-12-12 06:45:08.554008] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.202 [2024-12-12 06:45:08.554033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.202 #22 NEW cov: 12418 ft: 15004 corp: 17/290b lim: 45 exec/s: 22 rss: 73Mb L: 15/27 MS: 1 CopyPart- 00:07:01.202 [2024-12-12 06:45:08.594597] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ff0000fc cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.202 [2024-12-12 06:45:08.594621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.202 [2024-12-12 06:45:08.594690] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000043 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.202 [2024-12-12 06:45:08.594704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.202 [2024-12-12 06:45:08.594758] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.202 [2024-12-12 06:45:08.594771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:01.202 [2024-12-12 06:45:08.594823] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:0000004d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.202 [2024-12-12 06:45:08.594836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:01.202 #23 NEW cov: 12418 ft: 15371 corp: 18/326b lim: 45 exec/s: 23 rss: 73Mb L: 36/36 MS: 1 CrossOver- 00:07:01.202 [2024-12-12 06:45:08.654475] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ff0000fc cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.202 [2024-12-12 06:45:08.654501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.202 [2024-12-12 06:45:08.654554] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ff00ffff cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.202 [2024-12-12 06:45:08.654571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.202 #24 NEW cov: 12418 ft: 15392 corp: 19/352b lim: 45 exec/s: 24 rss: 73Mb L: 26/36 MS: 1 ChangeBit- 00:07:01.202 [2024-12-12 06:45:08.694389] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.202 [2024-12-12 06:45:08.694415] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.202 #25 NEW cov: 12418 ft: 15455 corp: 20/368b lim: 45 exec/s: 25 rss: 73Mb L: 16/36 MS: 1 InsertRepeatedBytes- 00:07:01.462 [2024-12-12 06:45:08.734724] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.462 [2024-12-12 06:45:08.734750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.462 [2024-12-12 06:45:08.734806] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.462 [2024-12-12 06:45:08.734819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.462 #26 NEW cov: 12418 ft: 15468 corp: 21/390b lim: 45 exec/s: 26 rss: 73Mb L: 22/36 MS: 1 CopyPart- 00:07:01.462 [2024-12-12 06:45:08.774665] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.462 [2024-12-12 06:45:08.774690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.462 #27 NEW cov: 12418 ft: 15477 corp: 22/401b lim: 45 exec/s: 27 rss: 73Mb L: 11/36 MS: 1 EraseBytes- 00:07:01.462 [2024-12-12 06:45:08.814918] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00002000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.462 [2024-12-12 06:45:08.814943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.462 [2024-12-12 06:45:08.815012] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:4d000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.462 [2024-12-12 06:45:08.815026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.462 #28 NEW cov: 12418 ft: 15556 corp: 23/422b lim: 45 exec/s: 28 rss: 73Mb L: 21/36 MS: 1 ChangeBit- 00:07:01.462 [2024-12-12 06:45:08.855046] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.462 [2024-12-12 06:45:08.855070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.462 [2024-12-12 06:45:08.855123] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.462 [2024-12-12 06:45:08.855137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 
cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.462 #29 NEW cov: 12418 ft: 15565 corp: 24/441b lim: 45 exec/s: 29 rss: 73Mb L: 19/36 MS: 1 ChangeBinInt- 00:07:01.462 [2024-12-12 06:45:08.915105] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.462 [2024-12-12 06:45:08.915130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.462 #30 NEW cov: 12418 ft: 15578 corp: 25/456b lim: 45 exec/s: 30 rss: 73Mb L: 15/36 MS: 1 ChangeBinInt- 00:07:01.462 [2024-12-12 06:45:08.955141] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.462 [2024-12-12 06:45:08.955171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.462 #31 NEW cov: 12418 ft: 15586 corp: 26/471b lim: 45 exec/s: 31 rss: 73Mb L: 15/36 MS: 1 CopyPart- 00:07:01.723 [2024-12-12 06:45:08.995275] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.723 [2024-12-12 06:45:08.995300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.723 #32 NEW cov: 12418 ft: 15590 corp: 27/486b lim: 45 exec/s: 32 rss: 73Mb L: 15/36 MS: 1 CrossOver- 00:07:01.723 [2024-12-12 06:45:09.035378] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:ce000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.723 [2024-12-12 06:45:09.035403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.723 #33 NEW cov: 12418 ft: 15633 corp: 28/501b lim: 45 exec/s: 33 rss: 73Mb L: 15/36 MS: 1 ChangeByte- 00:07:01.723 [2024-12-12 06:45:09.095618] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.723 [2024-12-12 06:45:09.095642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.723 #34 NEW cov: 12418 ft: 15642 corp: 29/517b lim: 45 exec/s: 34 rss: 73Mb L: 16/36 MS: 1 CMP- DE: "\000\000\000\000\000\000\000\001"- 00:07:01.723 [2024-12-12 06:45:09.155751] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.723 [2024-12-12 06:45:09.155775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.723 #35 NEW cov: 12418 ft: 15653 corp: 30/532b lim: 45 exec/s: 35 rss: 74Mb L: 15/36 MS: 1 ChangeByte- 00:07:01.724 [2024-12-12 06:45:09.215913] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.724 [2024-12-12 06:45:09.215938] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.984 #36 NEW cov: 12418 ft: 15710 corp: 31/548b lim: 45 exec/s: 36 rss: 74Mb L: 16/36 MS: 1 ChangeBit- 
00:07:01.984 [2024-12-12 06:45:09.276417] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.984 [2024-12-12 06:45:09.276442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.984 [2024-12-12 06:45:09.276496] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:000a0000 cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.984 [2024-12-12 06:45:09.276510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.984 [2024-12-12 06:45:09.276565] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.984 [2024-12-12 06:45:09.276578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:01.984 #37 NEW cov: 12418 ft: 15718 corp: 32/578b lim: 45 exec/s: 37 rss: 74Mb L: 30/36 MS: 1 CrossOver- 00:07:01.984 [2024-12-12 06:45:09.316422] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:25002000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.984 [2024-12-12 06:45:09.316447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.984 [2024-12-12 06:45:09.316504] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:004d0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.984 [2024-12-12 06:45:09.316518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.984 #38 NEW cov: 12418 ft: 15722 corp: 33/600b lim: 45 exec/s: 38 rss: 74Mb L: 22/36 MS: 1 InsertByte- 00:07:01.984 [2024-12-12 06:45:09.376424] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:0f000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.984 [2024-12-12 06:45:09.376450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.984 #39 NEW cov: 12418 ft: 15771 corp: 34/615b lim: 45 exec/s: 39 rss: 74Mb L: 15/36 MS: 1 ChangeBinInt- 00:07:01.984 [2024-12-12 06:45:09.416751] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00004f0a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.984 [2024-12-12 06:45:09.416776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.984 [2024-12-12 06:45:09.416834] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:0aff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.984 [2024-12-12 06:45:09.416847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.984 [2024-12-12 06:45:09.416900] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:0000ff00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.984 [2024-12-12 06:45:09.416914] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:01.984 #40 NEW cov: 12418 ft: 15776 corp: 35/646b lim: 45 exec/s: 40 rss: 74Mb L: 31/36 MS: 1 InsertByte- 00:07:01.984 [2024-12-12 06:45:09.476757] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:000a0000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.984 [2024-12-12 06:45:09.476782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.984 [2024-12-12 06:45:09.476838] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:4d000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.984 [2024-12-12 06:45:09.476851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.984 #41 NEW cov: 12418 ft: 15823 corp: 36/667b lim: 45 exec/s: 41 rss: 74Mb L: 21/36 MS: 1 CopyPart- 00:07:02.244 [2024-12-12 06:45:09.516675] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.244 [2024-12-12 06:45:09.516700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.244 #42 NEW cov: 12418 ft: 15869 corp: 37/678b lim: 45 exec/s: 21 rss: 74Mb L: 11/36 MS: 1 EraseBytes- 00:07:02.244 #42 DONE cov: 12418 ft: 15869 corp: 37/678b lim: 45 exec/s: 21 rss: 74Mb 00:07:02.244 ###### Recommended dictionary. ###### 00:07:02.244 "\000\000\000\000\000\000\000\001" # Uses: 0 00:07:02.244 ###### End of recommended dictionary. 
###### 00:07:02.244 Done 42 runs in 2 second(s) 00:07:02.244 06:45:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_5.conf /var/tmp/suppress_nvmf_fuzz 00:07:02.244 06:45:09 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:02.244 06:45:09 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:02.244 06:45:09 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1 00:07:02.244 06:45:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=6 00:07:02.244 06:45:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:02.244 06:45:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:02.244 06:45:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:07:02.244 06:45:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_6.conf 00:07:02.244 06:45:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:02.244 06:45:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:02.244 06:45:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 6 00:07:02.244 06:45:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4406 00:07:02.244 06:45:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:07:02.244 06:45:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' 00:07:02.244 06:45:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4406"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:02.244 06:45:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:02.244 06:45:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:02.244 06:45:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' -c /tmp/fuzz_json_6.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 -Z 6 00:07:02.244 [2024-12-12 06:45:09.687643] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 00:07:02.244 [2024-12-12 06:45:09.687715] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1155785 ] 00:07:02.503 [2024-12-12 06:45:09.882245] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.503 [2024-12-12 06:45:09.915953] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.503 [2024-12-12 06:45:09.975067] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:02.504 [2024-12-12 06:45:09.991384] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4406 *** 00:07:02.504 INFO: Running with entropic power schedule (0xFF, 100). 
00:07:02.504 INFO: Seed: 1390368068 00:07:02.763 INFO: Loaded 1 modules (390948 inline 8-bit counters): 390948 [0x2c8afcc, 0x2cea6f0), 00:07:02.763 INFO: Loaded 1 PC tables (390948 PCs): 390948 [0x2cea6f0,0x32e1930), 00:07:02.763 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:07:02.763 INFO: A corpus is not provided, starting from an empty corpus 00:07:02.763 #2 INITED exec/s: 0 rss: 66Mb 00:07:02.763 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:02.763 This may also happen if the target rejected all inputs we tried so far 00:07:02.763 [2024-12-12 06:45:10.067737] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a46 cdw11:00000000 00:07:02.763 [2024-12-12 06:45:10.067779] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.022 NEW_FUNC[1/715]: 0x446728 in fuzz_admin_delete_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:161 00:07:03.022 NEW_FUNC[2/715]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:03.022 #8 NEW cov: 12108 ft: 12107 corp: 2/3b lim: 10 exec/s: 0 rss: 72Mb L: 2/2 MS: 1 InsertByte- 00:07:03.022 [2024-12-12 06:45:10.418655] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000aff cdw11:00000000 00:07:03.022 [2024-12-12 06:45:10.418697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.022 #9 NEW cov: 12221 ft: 12552 corp: 3/6b lim: 10 exec/s: 0 rss: 72Mb L: 3/3 MS: 1 InsertByte- 00:07:03.022 [2024-12-12 06:45:10.488819] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000aff cdw11:00000000 00:07:03.022 [2024-12-12 06:45:10.488853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.023 #10 NEW cov: 12227 ft: 12856 corp: 4/9b lim: 10 exec/s: 0 rss: 72Mb L: 3/3 MS: 1 ChangeBit- 00:07:03.282 [2024-12-12 06:45:10.558993] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:03.282 [2024-12-12 06:45:10.559024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.282 #11 NEW cov: 12312 ft: 13160 corp: 5/12b lim: 10 exec/s: 0 rss: 73Mb L: 3/3 MS: 1 CrossOver- 00:07:03.282 [2024-12-12 06:45:10.629836] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000afb cdw11:00000000 00:07:03.282 [2024-12-12 06:45:10.629865] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.282 [2024-12-12 06:45:10.629987] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000fbfb cdw11:00000000 00:07:03.282 [2024-12-12 06:45:10.630004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.282 [2024-12-12 06:45:10.630131] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000fbfb 
cdw11:00000000 00:07:03.282 [2024-12-12 06:45:10.630147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.282 [2024-12-12 06:45:10.630281] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000fbfb cdw11:00000000 00:07:03.282 [2024-12-12 06:45:10.630299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:03.282 #12 NEW cov: 12312 ft: 13741 corp: 6/20b lim: 10 exec/s: 0 rss: 73Mb L: 8/8 MS: 1 InsertRepeatedBytes- 00:07:03.282 [2024-12-12 06:45:10.680003] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00007e02 cdw11:00000000 00:07:03.282 [2024-12-12 06:45:10.680032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.282 [2024-12-12 06:45:10.680153] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:03.282 [2024-12-12 06:45:10.680172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.282 [2024-12-12 06:45:10.680289] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:03.282 [2024-12-12 06:45:10.680308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.282 [2024-12-12 06:45:10.680440] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:03.282 [2024-12-12 06:45:10.680459] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:03.282 #14 NEW cov: 12312 ft: 13939 corp: 7/29b lim: 10 exec/s: 0 rss: 73Mb L: 9/9 MS: 2 ChangeByte-CMP- DE: "\002\000\000\000\000\000\000\000"- 00:07:03.282 [2024-12-12 06:45:10.729721] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a7e cdw11:00000000 00:07:03.283 [2024-12-12 06:45:10.729750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.283 [2024-12-12 06:45:10.729871] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000200 cdw11:00000000 00:07:03.283 [2024-12-12 06:45:10.729892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.283 #15 NEW cov: 12312 ft: 14161 corp: 8/33b lim: 10 exec/s: 0 rss: 73Mb L: 4/9 MS: 1 CrossOver- 00:07:03.283 [2024-12-12 06:45:10.780292] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000aff cdw11:00000000 00:07:03.283 [2024-12-12 06:45:10.780320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.283 [2024-12-12 06:45:10.780440] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:03.283 [2024-12-12 06:45:10.780457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 
00:07:03.283 [2024-12-12 06:45:10.780584] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:03.283 [2024-12-12 06:45:10.780601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.283 [2024-12-12 06:45:10.780730] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:03.283 [2024-12-12 06:45:10.780746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:03.283 #16 NEW cov: 12312 ft: 14233 corp: 9/42b lim: 10 exec/s: 0 rss: 73Mb L: 9/9 MS: 1 InsertRepeatedBytes- 00:07:03.542 [2024-12-12 06:45:10.830049] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00001a7e cdw11:00000000 00:07:03.542 [2024-12-12 06:45:10.830078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.542 [2024-12-12 06:45:10.830210] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000200 cdw11:00000000 00:07:03.542 [2024-12-12 06:45:10.830225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.542 #17 NEW cov: 12312 ft: 14259 corp: 10/46b lim: 10 exec/s: 0 rss: 73Mb L: 4/9 MS: 1 ChangeBit- 00:07:03.542 [2024-12-12 06:45:10.900733] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00007e02 cdw11:00000000 00:07:03.542 [2024-12-12 06:45:10.900772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.542 [2024-12-12 06:45:10.900899] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:03.542 [2024-12-12 06:45:10.900917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.542 [2024-12-12 06:45:10.901051] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:03.542 [2024-12-12 06:45:10.901067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.542 [2024-12-12 06:45:10.901198] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:03.542 [2024-12-12 06:45:10.901214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:03.542 NEW_FUNC[1/1]: 0x1c586b8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:652 00:07:03.542 #18 NEW cov: 12329 ft: 14346 corp: 11/55b lim: 10 exec/s: 0 rss: 73Mb L: 9/9 MS: 1 ShuffleBytes- 00:07:03.542 [2024-12-12 06:45:10.970499] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ff7e cdw11:00000000 00:07:03.542 [2024-12-12 06:45:10.970526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.542 [2024-12-12 06:45:10.970653] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000200 cdw11:00000000 00:07:03.542 [2024-12-12 06:45:10.970670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.542 #19 NEW cov: 12329 ft: 14362 corp: 12/59b lim: 10 exec/s: 0 rss: 73Mb L: 4/9 MS: 1 ChangeBinInt- 00:07:03.542 [2024-12-12 06:45:11.021090] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000aff cdw11:00000000 00:07:03.542 [2024-12-12 06:45:11.021117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.542 [2024-12-12 06:45:11.021261] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00006600 cdw11:00000000 00:07:03.542 [2024-12-12 06:45:11.021278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.542 [2024-12-12 06:45:11.021406] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:03.542 [2024-12-12 06:45:11.021422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.542 [2024-12-12 06:45:11.021550] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:03.542 [2024-12-12 06:45:11.021568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:03.542 #20 NEW cov: 12329 ft: 14403 corp: 13/68b lim: 10 exec/s: 20 rss: 73Mb L: 9/9 MS: 1 CrossOver- 00:07:03.802 [2024-12-12 06:45:11.070510] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00001aff cdw11:00000000 00:07:03.802 [2024-12-12 06:45:11.070537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.802 #21 NEW cov: 12329 ft: 14417 corp: 14/71b lim: 10 exec/s: 21 rss: 73Mb L: 3/9 MS: 1 ChangeBit- 00:07:03.802 [2024-12-12 06:45:11.120755] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a7e cdw11:00000000 00:07:03.802 [2024-12-12 06:45:11.120782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.802 #22 NEW cov: 12329 ft: 14431 corp: 15/73b lim: 10 exec/s: 22 rss: 73Mb L: 2/9 MS: 1 CrossOver- 00:07:03.802 [2024-12-12 06:45:11.170898] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000f9ff cdw11:00000000 00:07:03.802 [2024-12-12 06:45:11.170926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.802 #23 NEW cov: 12329 ft: 14452 corp: 16/76b lim: 10 exec/s: 23 rss: 73Mb L: 3/9 MS: 1 ChangeBinInt- 00:07:03.802 [2024-12-12 06:45:11.221436] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000aeae cdw11:00000000 00:07:03.802 [2024-12-12 06:45:11.221464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.802 [2024-12-12 06:45:11.221588] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000aeae cdw11:00000000 00:07:03.802 [2024-12-12 06:45:11.221605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.802 [2024-12-12 06:45:11.221735] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000a7e cdw11:00000000 00:07:03.802 [2024-12-12 06:45:11.221752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.802 #24 NEW cov: 12329 ft: 14582 corp: 17/82b lim: 10 exec/s: 24 rss: 73Mb L: 6/9 MS: 1 InsertRepeatedBytes- 00:07:03.802 [2024-12-12 06:45:11.292180] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a3b cdw11:00000000 00:07:03.802 [2024-12-12 06:45:11.292214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.802 [2024-12-12 06:45:11.292349] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:03.802 [2024-12-12 06:45:11.292369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.802 [2024-12-12 06:45:11.292501] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:03.802 [2024-12-12 06:45:11.292518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.802 [2024-12-12 06:45:11.292651] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:03.802 [2024-12-12 06:45:11.292671] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:03.802 [2024-12-12 06:45:11.292794] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000ff66 cdw11:00000000 00:07:03.802 [2024-12-12 06:45:11.292812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:04.061 #25 NEW cov: 12329 ft: 14659 corp: 18/92b lim: 10 exec/s: 25 rss: 73Mb L: 10/10 MS: 1 InsertByte- 00:07:04.061 [2024-12-12 06:45:11.361531] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00001a02 cdw11:00000000 00:07:04.061 [2024-12-12 06:45:11.361560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.061 #26 NEW cov: 12329 ft: 14706 corp: 19/95b lim: 10 exec/s: 26 rss: 73Mb L: 3/10 MS: 1 EraseBytes- 00:07:04.061 [2024-12-12 06:45:11.431695] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:04.061 [2024-12-12 06:45:11.431724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.061 #27 NEW cov: 12329 ft: 14721 corp: 20/97b lim: 10 exec/s: 27 rss: 73Mb L: 2/10 MS: 1 CrossOver- 00:07:04.061 [2024-12-12 06:45:11.502643] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00007e02 cdw11:00000000 00:07:04.061 [2024-12-12 06:45:11.502671] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.061 [2024-12-12 06:45:11.502798] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:04.061 [2024-12-12 06:45:11.502816] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.062 [2024-12-12 06:45:11.502940] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:04.062 [2024-12-12 06:45:11.502958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:04.062 [2024-12-12 06:45:11.503083] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:04.062 [2024-12-12 06:45:11.503102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:04.062 #28 NEW cov: 12329 ft: 14742 corp: 21/106b lim: 10 exec/s: 28 rss: 73Mb L: 9/10 MS: 1 ShuffleBytes- 00:07:04.062 [2024-12-12 06:45:11.552285] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00007fff cdw11:00000000 00:07:04.062 [2024-12-12 06:45:11.552314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.062 [2024-12-12 06:45:11.552439] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00007e02 cdw11:00000000 00:07:04.062 [2024-12-12 06:45:11.552460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.321 #29 NEW cov: 12329 ft: 14802 corp: 22/111b lim: 10 exec/s: 29 rss: 73Mb L: 5/10 MS: 1 InsertByte- 00:07:04.321 [2024-12-12 06:45:11.622968] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000afb cdw11:00000000 00:07:04.321 [2024-12-12 06:45:11.622997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.321 [2024-12-12 06:45:11.623122] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000fffb cdw11:00000000 00:07:04.321 [2024-12-12 06:45:11.623141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.321 [2024-12-12 06:45:11.623276] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000fbfb cdw11:00000000 00:07:04.321 [2024-12-12 06:45:11.623294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:04.321 [2024-12-12 06:45:11.623411] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000fbfb cdw11:00000000 00:07:04.321 [2024-12-12 06:45:11.623429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:04.321 #30 NEW cov: 12329 ft: 14853 corp: 23/119b lim: 10 exec/s: 30 rss: 74Mb L: 8/10 MS: 1 ChangeBit- 00:07:04.321 [2024-12-12 06:45:11.692755] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a7e cdw11:00000000 00:07:04.321 [2024-12-12 06:45:11.692784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.321 [2024-12-12 06:45:11.692913] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:000002ff cdw11:00000000 00:07:04.321 [2024-12-12 06:45:11.692930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.321 #31 NEW cov: 12329 ft: 14905 corp: 24/124b lim: 10 exec/s: 31 rss: 74Mb L: 5/10 MS: 1 CrossOver- 00:07:04.321 [2024-12-12 06:45:11.743342] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000aff cdw11:00000000 00:07:04.321 [2024-12-12 06:45:11.743370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.321 [2024-12-12 06:45:11.743498] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:04.321 [2024-12-12 06:45:11.743518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.321 [2024-12-12 06:45:11.743636] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:04.321 [2024-12-12 06:45:11.743653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:04.321 [2024-12-12 06:45:11.743786] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:04.321 [2024-12-12 06:45:11.743805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:04.321 #32 NEW cov: 12329 ft: 14907 corp: 25/133b lim: 10 exec/s: 32 rss: 74Mb L: 9/10 MS: 1 ChangeByte- 00:07:04.321 [2024-12-12 06:45:11.793481] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00007e02 cdw11:00000000 00:07:04.321 [2024-12-12 06:45:11.793508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.322 [2024-12-12 06:45:11.793638] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000200 cdw11:00000000 00:07:04.322 [2024-12-12 06:45:11.793659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.322 [2024-12-12 06:45:11.793788] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:04.322 [2024-12-12 06:45:11.793805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:04.322 [2024-12-12 06:45:11.793935] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:04.322 [2024-12-12 06:45:11.793953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:04.322 #33 NEW cov: 12329 ft: 14908 corp: 26/142b lim: 10 exec/s: 33 rss: 74Mb L: 9/10 MS: 1 
CopyPart- 00:07:04.581 [2024-12-12 06:45:11.863906] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000aff cdw11:00000000 00:07:04.581 [2024-12-12 06:45:11.863933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.581 [2024-12-12 06:45:11.864060] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:04.581 [2024-12-12 06:45:11.864077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.581 [2024-12-12 06:45:11.864200] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:04.581 [2024-12-12 06:45:11.864216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:04.581 [2024-12-12 06:45:11.864352] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:04.581 [2024-12-12 06:45:11.864370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:04.581 [2024-12-12 06:45:11.864490] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000ff66 cdw11:00000000 00:07:04.581 [2024-12-12 06:45:11.864507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:04.581 #34 NEW cov: 12329 ft: 14912 corp: 27/152b lim: 10 exec/s: 34 rss: 74Mb L: 10/10 MS: 1 CopyPart- 00:07:04.581 [2024-12-12 06:45:11.913188] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002bf3 cdw11:00000000 00:07:04.581 [2024-12-12 06:45:11.913215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.581 #39 NEW cov: 12336 ft: 14949 corp: 28/154b lim: 10 exec/s: 39 rss: 74Mb L: 2/10 MS: 5 EraseBytes-ChangeBit-ShuffleBytes-ChangeBinInt-InsertByte- 00:07:04.581 [2024-12-12 06:45:11.984038] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00007e30 cdw11:00000000 00:07:04.581 [2024-12-12 06:45:11.984068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.581 [2024-12-12 06:45:11.984202] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:04.582 [2024-12-12 06:45:11.984220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.582 [2024-12-12 06:45:11.984357] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:04.582 [2024-12-12 06:45:11.984373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:04.582 [2024-12-12 06:45:11.984498] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:04.582 [2024-12-12 06:45:11.984517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:04.582 #40 NEW cov: 12336 ft: 14970 corp: 29/163b lim: 10 exec/s: 40 rss: 74Mb L: 9/10 MS: 1 ChangeByte- 00:07:04.582 [2024-12-12 06:45:12.033664] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000aff cdw11:00000000 00:07:04.582 [2024-12-12 06:45:12.033691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.582 [2024-12-12 06:45:12.033826] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:04.582 [2024-12-12 06:45:12.033842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.582 #41 NEW cov: 12336 ft: 14971 corp: 30/168b lim: 10 exec/s: 20 rss: 74Mb L: 5/10 MS: 1 InsertRepeatedBytes- 00:07:04.582 #41 DONE cov: 12336 ft: 14971 corp: 30/168b lim: 10 exec/s: 20 rss: 74Mb 00:07:04.582 ###### Recommended dictionary. ###### 00:07:04.582 "\002\000\000\000\000\000\000\000" # Uses: 0 00:07:04.582 ###### End of recommended dictionary. ###### 00:07:04.582 Done 41 runs in 2 second(s) 00:07:04.841 06:45:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_6.conf /var/tmp/suppress_nvmf_fuzz 00:07:04.841 06:45:12 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:04.841 06:45:12 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:04.841 06:45:12 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 7 1 0x1 00:07:04.841 06:45:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=7 00:07:04.841 06:45:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:04.841 06:45:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:04.841 06:45:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:07:04.841 06:45:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_7.conf 00:07:04.841 06:45:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:04.841 06:45:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:04.841 06:45:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 7 00:07:04.841 06:45:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4407 00:07:04.841 06:45:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:07:04.841 06:45:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' 00:07:04.841 06:45:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4407"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:04.841 06:45:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:04.841 06:45:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:04.841 06:45:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' -c /tmp/fuzz_json_7.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 -Z 7 00:07:04.841 [2024-12-12 06:45:12.204144] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 00:07:04.841 [2024-12-12 06:45:12.204218] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1156314 ] 00:07:05.101 [2024-12-12 06:45:12.387424] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.101 [2024-12-12 06:45:12.420093] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.101 [2024-12-12 06:45:12.478877] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:05.101 [2024-12-12 06:45:12.495143] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4407 *** 00:07:05.101 INFO: Running with entropic power schedule (0xFF, 100). 00:07:05.101 INFO: Seed: 3892351524 00:07:05.101 INFO: Loaded 1 modules (390948 inline 8-bit counters): 390948 [0x2c8afcc, 0x2cea6f0), 00:07:05.101 INFO: Loaded 1 PC tables (390948 PCs): 390948 [0x2cea6f0,0x32e1930), 00:07:05.101 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:07:05.101 INFO: A corpus is not provided, starting from an empty corpus 00:07:05.101 #2 INITED exec/s: 0 rss: 65Mb 00:07:05.101 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:05.101 This may also happen if the target rejected all inputs we tried so far 00:07:05.101 [2024-12-12 06:45:12.540502] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:05.101 [2024-12-12 06:45:12.540529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.361 NEW_FUNC[1/714]: 0x447128 in fuzz_admin_delete_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:172 00:07:05.361 NEW_FUNC[2/714]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:05.361 #3 NEW cov: 12106 ft: 12089 corp: 2/3b lim: 10 exec/s: 0 rss: 72Mb L: 2/2 MS: 1 CrossOver- 00:07:05.361 [2024-12-12 06:45:12.851269] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:05.361 [2024-12-12 06:45:12.851301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.620 NEW_FUNC[1/1]: 0x17b1078 in nvme_qpair_get_state /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/./nvme_internal.h:1575 00:07:05.620 #4 NEW cov: 12221 ft: 12525 corp: 3/6b lim: 10 exec/s: 0 rss: 72Mb L: 3/3 MS: 1 CrossOver- 00:07:05.620 [2024-12-12 06:45:12.911380] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00008c8c cdw11:00000000 00:07:05.620 [2024-12-12 06:45:12.911406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:07:05.620 #7 NEW cov: 12227 ft: 12972 corp: 4/8b lim: 10 exec/s: 0 rss: 72Mb L: 2/3 MS: 3 ChangeByte-ChangeBinInt-CopyPart- 00:07:05.620 [2024-12-12 06:45:12.951483] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:05.620 [2024-12-12 06:45:12.951508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.620 #8 NEW cov: 12312 ft: 13192 corp: 5/11b lim: 10 exec/s: 0 rss: 72Mb L: 3/3 MS: 1 ShuffleBytes- 00:07:05.621 [2024-12-12 06:45:13.011604] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:05.621 [2024-12-12 06:45:13.011630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.621 #9 NEW cov: 12312 ft: 13248 corp: 6/14b lim: 10 exec/s: 0 rss: 72Mb L: 3/3 MS: 1 ChangeBinInt- 00:07:05.621 [2024-12-12 06:45:13.051740] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:05.621 [2024-12-12 06:45:13.051765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.621 #10 NEW cov: 12312 ft: 13285 corp: 7/16b lim: 10 exec/s: 0 rss: 72Mb L: 2/3 MS: 1 CopyPart- 00:07:05.621 [2024-12-12 06:45:13.091842] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000aff cdw11:00000000 00:07:05.621 [2024-12-12 06:45:13.091867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.621 #14 NEW cov: 12312 ft: 13462 corp: 8/19b lim: 10 exec/s: 0 rss: 72Mb L: 3/3 MS: 4 EraseBytes-ShuffleBytes-ShuffleBytes-CMP- DE: "\377\002"- 00:07:05.621 [2024-12-12 06:45:13.131961] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000ae2 cdw11:00000000 00:07:05.621 [2024-12-12 06:45:13.131986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.880 #15 NEW cov: 12312 ft: 13473 corp: 9/22b lim: 10 exec/s: 0 rss: 73Mb L: 3/3 MS: 1 InsertByte- 00:07:05.880 [2024-12-12 06:45:13.192266] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:05.880 [2024-12-12 06:45:13.192291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.880 [2024-12-12 06:45:13.192358] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000090a cdw11:00000000 00:07:05.880 [2024-12-12 06:45:13.192371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.880 #16 NEW cov: 12312 ft: 13734 corp: 10/27b lim: 10 exec/s: 0 rss: 73Mb L: 5/5 MS: 1 CopyPart- 00:07:05.880 [2024-12-12 06:45:13.252383] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a1c cdw11:00000000 00:07:05.881 [2024-12-12 06:45:13.252408] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.881 [2024-12-12 06:45:13.252476] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000a09 cdw11:00000000 00:07:05.881 [2024-12-12 06:45:13.252489] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.881 #17 NEW cov: 12312 ft: 13778 corp: 11/31b lim: 10 exec/s: 0 rss: 73Mb L: 4/5 MS: 1 InsertByte- 00:07:05.881 [2024-12-12 06:45:13.292406] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ff02 cdw11:00000000 00:07:05.881 [2024-12-12 06:45:13.292431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.881 #18 NEW cov: 12312 ft: 13813 corp: 12/33b lim: 10 exec/s: 0 rss: 73Mb L: 2/5 MS: 1 EraseBytes- 00:07:05.881 [2024-12-12 06:45:13.352541] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ff0a cdw11:00000000 00:07:05.881 [2024-12-12 06:45:13.352565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.881 #19 NEW cov: 12312 ft: 13846 corp: 13/36b lim: 10 exec/s: 0 rss: 73Mb L: 3/5 MS: 1 CrossOver- 00:07:06.140 [2024-12-12 06:45:13.412732] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ff02 cdw11:00000000 00:07:06.140 [2024-12-12 06:45:13.412757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.140 NEW_FUNC[1/1]: 0x1c586b8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:652 00:07:06.140 #20 NEW cov: 12335 ft: 13914 corp: 14/38b lim: 10 exec/s: 0 rss: 73Mb L: 2/5 MS: 1 PersAutoDict- DE: "\377\002"- 00:07:06.140 [2024-12-12 06:45:13.452834] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ff42 cdw11:00000000 00:07:06.140 [2024-12-12 06:45:13.452858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.140 #21 NEW cov: 12335 ft: 13930 corp: 15/40b lim: 10 exec/s: 0 rss: 73Mb L: 2/5 MS: 1 ChangeBit- 00:07:06.140 [2024-12-12 06:45:13.493087] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ff02 cdw11:00000000 00:07:06.140 [2024-12-12 06:45:13.493112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.140 [2024-12-12 06:45:13.493160] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000aff cdw11:00000000 00:07:06.140 [2024-12-12 06:45:13.493177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.140 #22 NEW cov: 12335 ft: 13972 corp: 16/45b lim: 10 exec/s: 0 rss: 73Mb L: 5/5 MS: 1 PersAutoDict- DE: "\377\002"- 00:07:06.140 [2024-12-12 06:45:13.533067] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000a5ff cdw11:00000000 00:07:06.140 [2024-12-12 06:45:13.533092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.140 #23 NEW cov: 12335 ft: 13988 corp: 17/48b lim: 10 exec/s: 23 rss: 73Mb L: 3/5 MS: 1 InsertByte- 00:07:06.140 [2024-12-12 
06:45:13.573170] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ff4d cdw11:00000000 00:07:06.140 [2024-12-12 06:45:13.573195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.140 #24 NEW cov: 12335 ft: 14016 corp: 18/51b lim: 10 exec/s: 24 rss: 73Mb L: 3/5 MS: 1 InsertByte- 00:07:06.140 [2024-12-12 06:45:13.633503] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:06.140 [2024-12-12 06:45:13.633528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.140 [2024-12-12 06:45:13.633594] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000090a cdw11:00000000 00:07:06.140 [2024-12-12 06:45:13.633608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.400 #25 NEW cov: 12335 ft: 14042 corp: 19/56b lim: 10 exec/s: 25 rss: 73Mb L: 5/5 MS: 1 ChangeBit- 00:07:06.400 [2024-12-12 06:45:13.693876] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00008cff cdw11:00000000 00:07:06.400 [2024-12-12 06:45:13.693901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.400 [2024-12-12 06:45:13.693953] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:06.400 [2024-12-12 06:45:13.693966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.400 [2024-12-12 06:45:13.694015] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:06.400 [2024-12-12 06:45:13.694044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:06.400 [2024-12-12 06:45:13.694095] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:06.400 [2024-12-12 06:45:13.694108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:06.400 #29 NEW cov: 12335 ft: 14336 corp: 20/65b lim: 10 exec/s: 29 rss: 73Mb L: 9/9 MS: 4 EraseBytes-ShuffleBytes-CopyPart-InsertRepeatedBytes- 00:07:06.400 [2024-12-12 06:45:13.753762] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000ae2 cdw11:00000000 00:07:06.400 [2024-12-12 06:45:13.753787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.400 [2024-12-12 06:45:13.753855] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000250a cdw11:00000000 00:07:06.400 [2024-12-12 06:45:13.753869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.400 #30 NEW cov: 12335 ft: 14369 corp: 21/69b lim: 10 exec/s: 30 rss: 73Mb L: 4/9 MS: 1 InsertByte- 00:07:06.400 [2024-12-12 06:45:13.814133] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:06.400 [2024-12-12 06:45:13.814165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.400 [2024-12-12 06:45:13.814244] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000090a cdw11:00000000 00:07:06.400 [2024-12-12 06:45:13.814257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.400 [2024-12-12 06:45:13.814307] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00004900 cdw11:00000000 00:07:06.400 [2024-12-12 06:45:13.814320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:06.400 [2024-12-12 06:45:13.814372] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:06.400 [2024-12-12 06:45:13.814385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:06.400 #31 NEW cov: 12335 ft: 14385 corp: 22/77b lim: 10 exec/s: 31 rss: 73Mb L: 8/9 MS: 1 InsertRepeatedBytes- 00:07:06.400 [2024-12-12 06:45:13.874133] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000aff cdw11:00000000 00:07:06.400 [2024-12-12 06:45:13.874162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.400 [2024-12-12 06:45:13.874213] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000020a cdw11:00000000 00:07:06.400 [2024-12-12 06:45:13.874227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.400 #32 NEW cov: 12335 ft: 14397 corp: 23/82b lim: 10 exec/s: 32 rss: 73Mb L: 5/9 MS: 1 PersAutoDict- DE: "\377\002"- 00:07:06.400 [2024-12-12 06:45:13.914074] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00004224 cdw11:00000000 00:07:06.400 [2024-12-12 06:45:13.914099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.660 #34 NEW cov: 12335 ft: 14398 corp: 24/84b lim: 10 exec/s: 34 rss: 74Mb L: 2/9 MS: 2 EraseBytes-InsertByte- 00:07:06.660 [2024-12-12 06:45:13.974393] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:06.660 [2024-12-12 06:45:13.974419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.660 [2024-12-12 06:45:13.974470] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000931 cdw11:00000000 00:07:06.660 [2024-12-12 06:45:13.974484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.660 #35 NEW cov: 12335 ft: 14402 corp: 25/88b lim: 10 exec/s: 35 rss: 74Mb L: 4/9 MS: 1 InsertByte- 00:07:06.660 [2024-12-12 06:45:14.014392] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000423d cdw11:00000000 00:07:06.660 
[2024-12-12 06:45:14.014417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.660 #36 NEW cov: 12335 ft: 14421 corp: 26/90b lim: 10 exec/s: 36 rss: 74Mb L: 2/9 MS: 1 ChangeByte- 00:07:06.660 [2024-12-12 06:45:14.074557] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00002a0a cdw11:00000000 00:07:06.660 [2024-12-12 06:45:14.074581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.660 #37 NEW cov: 12335 ft: 14434 corp: 27/93b lim: 10 exec/s: 37 rss: 74Mb L: 3/9 MS: 1 ChangeBit- 00:07:06.660 [2024-12-12 06:45:14.114774] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:06.660 [2024-12-12 06:45:14.114801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.660 [2024-12-12 06:45:14.114867] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:06.660 [2024-12-12 06:45:14.114881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.660 #38 NEW cov: 12335 ft: 14488 corp: 28/98b lim: 10 exec/s: 38 rss: 74Mb L: 5/9 MS: 1 CrossOver- 00:07:06.660 [2024-12-12 06:45:14.154887] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000022a cdw11:00000000 00:07:06.660 [2024-12-12 06:45:14.154911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.660 [2024-12-12 06:45:14.154979] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00002a2a cdw11:00000000 00:07:06.660 [2024-12-12 06:45:14.154992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.660 #40 NEW cov: 12335 ft: 14500 corp: 29/103b lim: 10 exec/s: 40 rss: 74Mb L: 5/9 MS: 2 CrossOver-InsertRepeatedBytes- 00:07:06.920 [2024-12-12 06:45:14.194977] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000a5ff cdw11:00000000 00:07:06.920 [2024-12-12 06:45:14.195002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.920 [2024-12-12 06:45:14.195071] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:000002ff cdw11:00000000 00:07:06.920 [2024-12-12 06:45:14.195085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.920 #41 NEW cov: 12335 ft: 14562 corp: 30/108b lim: 10 exec/s: 41 rss: 74Mb L: 5/9 MS: 1 PersAutoDict- DE: "\377\002"- 00:07:06.920 [2024-12-12 06:45:14.255379] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00008cff cdw11:00000000 00:07:06.920 [2024-12-12 06:45:14.255405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.920 [2024-12-12 06:45:14.255456] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff 
cdw11:00000000 00:07:06.920 [2024-12-12 06:45:14.255470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.920 [2024-12-12 06:45:14.255519] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:06.920 [2024-12-12 06:45:14.255547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:06.920 [2024-12-12 06:45:14.255597] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffc9 cdw11:00000000 00:07:06.920 [2024-12-12 06:45:14.255610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:06.920 #42 NEW cov: 12335 ft: 14564 corp: 31/117b lim: 10 exec/s: 42 rss: 74Mb L: 9/9 MS: 1 ChangeByte- 00:07:06.920 [2024-12-12 06:45:14.315186] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00004242 cdw11:00000000 00:07:06.920 [2024-12-12 06:45:14.315211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.920 #43 NEW cov: 12335 ft: 14588 corp: 32/119b lim: 10 exec/s: 43 rss: 74Mb L: 2/9 MS: 1 CopyPart- 00:07:06.920 [2024-12-12 06:45:14.355450] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00001c0a cdw11:00000000 00:07:06.920 [2024-12-12 06:45:14.355474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.920 [2024-12-12 06:45:14.355525] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000909 cdw11:00000000 00:07:06.920 [2024-12-12 06:45:14.355541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.920 #44 NEW cov: 12335 ft: 14603 corp: 33/123b lim: 10 exec/s: 44 rss: 74Mb L: 4/9 MS: 1 CopyPart- 00:07:06.920 [2024-12-12 06:45:14.415593] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:07:06.920 [2024-12-12 06:45:14.415618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.920 [2024-12-12 06:45:14.415667] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:06.920 [2024-12-12 06:45:14.415680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.920 #45 NEW cov: 12335 ft: 14625 corp: 34/128b lim: 10 exec/s: 45 rss: 74Mb L: 5/9 MS: 1 InsertRepeatedBytes- 00:07:07.180 [2024-12-12 06:45:14.455589] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000ab3 cdw11:00000000 00:07:07.180 [2024-12-12 06:45:14.455615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.180 #47 NEW cov: 12335 ft: 14641 corp: 35/130b lim: 10 exec/s: 47 rss: 74Mb L: 2/9 MS: 2 CopyPart-InsertByte- 00:07:07.180 [2024-12-12 06:45:14.495872] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 
cdw10:00003b0a cdw11:00000000 00:07:07.180 [2024-12-12 06:45:14.495897] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.180 [2024-12-12 06:45:14.495948] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000e20a cdw11:00000000 00:07:07.180 [2024-12-12 06:45:14.495961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.180 #48 NEW cov: 12335 ft: 14727 corp: 36/134b lim: 10 exec/s: 48 rss: 74Mb L: 4/9 MS: 1 InsertByte- 00:07:07.180 [2024-12-12 06:45:14.535831] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00004242 cdw11:00000000 00:07:07.180 [2024-12-12 06:45:14.535856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.180 #49 NEW cov: 12335 ft: 14752 corp: 37/136b lim: 10 exec/s: 24 rss: 74Mb L: 2/9 MS: 1 ShuffleBytes- 00:07:07.180 #49 DONE cov: 12335 ft: 14752 corp: 37/136b lim: 10 exec/s: 24 rss: 74Mb 00:07:07.180 ###### Recommended dictionary. ###### 00:07:07.180 "\377\002" # Uses: 4 00:07:07.180 ###### End of recommended dictionary. ###### 00:07:07.180 Done 49 runs in 2 second(s) 00:07:07.180 06:45:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_7.conf /var/tmp/suppress_nvmf_fuzz 00:07:07.180 06:45:14 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:07.180 06:45:14 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:07.180 06:45:14 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 8 1 0x1 00:07:07.180 06:45:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=8 00:07:07.180 06:45:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:07.180 06:45:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:07.180 06:45:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:07:07.180 06:45:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_8.conf 00:07:07.180 06:45:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:07.180 06:45:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:07.180 06:45:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 8 00:07:07.180 06:45:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4408 00:07:07.180 06:45:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:07:07.180 06:45:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' 00:07:07.180 06:45:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4408"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:07.180 06:45:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:07.180 06:45:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:07.180 06:45:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' -c /tmp/fuzz_json_8.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 -Z 8 00:07:07.440 [2024-12-12 06:45:14.729002] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 00:07:07.440 [2024-12-12 06:45:14.729072] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1156749 ] 00:07:07.440 [2024-12-12 06:45:14.917996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.440 [2024-12-12 06:45:14.951249] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.702 [2024-12-12 06:45:15.010176] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:07.702 [2024-12-12 06:45:15.026496] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4408 *** 00:07:07.702 INFO: Running with entropic power schedule (0xFF, 100). 00:07:07.702 INFO: Seed: 2130374280 00:07:07.702 INFO: Loaded 1 modules (390948 inline 8-bit counters): 390948 [0x2c8afcc, 0x2cea6f0), 00:07:07.702 INFO: Loaded 1 PC tables (390948 PCs): 390948 [0x2cea6f0,0x32e1930), 00:07:07.702 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:07:07.702 INFO: A corpus is not provided, starting from an empty corpus 00:07:07.702 [2024-12-12 06:45:15.091870] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.702 [2024-12-12 06:45:15.091899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.702 #2 INITED cov: 12115 ft: 12114 corp: 1/1b exec/s: 0 rss: 70Mb 00:07:07.702 [2024-12-12 06:45:15.131893] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.702 [2024-12-12 06:45:15.131920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.702 #3 NEW cov: 12247 ft: 12657 corp: 2/2b lim: 5 exec/s: 0 rss: 71Mb L: 1/1 MS: 1 ChangeBit- 00:07:07.702 [2024-12-12 06:45:15.192091] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.702 [2024-12-12 06:45:15.192116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.702 #4 NEW cov: 12253 ft: 12939 corp: 3/3b lim: 5 exec/s: 0 rss: 71Mb L: 1/1 MS: 1 ShuffleBytes- 00:07:07.963 [2024-12-12 06:45:15.232166] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.963 [2024-12-12 06:45:15.232192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.963 #5 NEW 
cov: 12338 ft: 13194 corp: 4/4b lim: 5 exec/s: 0 rss: 71Mb L: 1/1 MS: 1 CrossOver- 00:07:07.963 [2024-12-12 06:45:15.272732] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.963 [2024-12-12 06:45:15.272761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.963 [2024-12-12 06:45:15.272817] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.963 [2024-12-12 06:45:15.272830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.963 [2024-12-12 06:45:15.272884] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.963 [2024-12-12 06:45:15.272897] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:07.963 [2024-12-12 06:45:15.272951] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.963 [2024-12-12 06:45:15.272964] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:07.963 #6 NEW cov: 12338 ft: 14108 corp: 5/8b lim: 5 exec/s: 0 rss: 71Mb L: 4/4 MS: 1 InsertRepeatedBytes- 00:07:07.963 [2024-12-12 06:45:15.332474] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.963 [2024-12-12 06:45:15.332500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.963 #7 NEW cov: 12338 ft: 14207 corp: 6/9b lim: 5 exec/s: 0 rss: 71Mb L: 1/4 MS: 1 ChangeByte- 00:07:07.963 [2024-12-12 06:45:15.372571] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.963 [2024-12-12 06:45:15.372596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.963 #8 NEW cov: 12338 ft: 14245 corp: 7/10b lim: 5 exec/s: 0 rss: 71Mb L: 1/4 MS: 1 ChangeBinInt- 00:07:07.963 [2024-12-12 06:45:15.432737] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.963 [2024-12-12 06:45:15.432762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.963 #9 NEW cov: 12338 ft: 14273 corp: 8/11b lim: 5 exec/s: 0 rss: 71Mb L: 1/4 MS: 1 ShuffleBytes- 00:07:07.963 [2024-12-12 06:45:15.473326] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.963 [2024-12-12 06:45:15.473351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.963 
[2024-12-12 06:45:15.473408] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.963 [2024-12-12 06:45:15.473421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.963 [2024-12-12 06:45:15.473492] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.963 [2024-12-12 06:45:15.473505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:07.963 [2024-12-12 06:45:15.473560] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:07.963 [2024-12-12 06:45:15.473573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:08.222 #10 NEW cov: 12338 ft: 14320 corp: 9/15b lim: 5 exec/s: 0 rss: 71Mb L: 4/4 MS: 1 ShuffleBytes- 00:07:08.222 [2024-12-12 06:45:15.533021] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.222 [2024-12-12 06:45:15.533047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.222 #11 NEW cov: 12338 ft: 14395 corp: 10/16b lim: 5 exec/s: 0 rss: 71Mb L: 1/4 MS: 1 ShuffleBytes- 00:07:08.222 [2024-12-12 06:45:15.573598] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.222 [2024-12-12 06:45:15.573625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.222 [2024-12-12 06:45:15.573682] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.222 [2024-12-12 06:45:15.573696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.222 [2024-12-12 06:45:15.573751] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.222 [2024-12-12 06:45:15.573764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:08.222 [2024-12-12 06:45:15.573819] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.222 [2024-12-12 06:45:15.573833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:08.222 #12 NEW cov: 12338 ft: 14433 corp: 11/20b lim: 5 exec/s: 0 rss: 71Mb L: 4/4 MS: 1 ShuffleBytes- 00:07:08.222 [2024-12-12 06:45:15.633910] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:07:08.222 [2024-12-12 06:45:15.633935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.222 [2024-12-12 06:45:15.634011] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.222 [2024-12-12 06:45:15.634025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.222 [2024-12-12 06:45:15.634081] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.222 [2024-12-12 06:45:15.634094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:08.222 [2024-12-12 06:45:15.634153] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.222 [2024-12-12 06:45:15.634166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:08.222 [2024-12-12 06:45:15.634221] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.222 [2024-12-12 06:45:15.634234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:08.222 #13 NEW cov: 12338 ft: 14490 corp: 12/25b lim: 5 exec/s: 0 rss: 71Mb L: 5/5 MS: 1 InsertByte- 00:07:08.222 [2024-12-12 06:45:15.673391] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.222 [2024-12-12 06:45:15.673419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.222 #14 NEW cov: 12338 ft: 14550 corp: 13/26b lim: 5 exec/s: 0 rss: 71Mb L: 1/5 MS: 1 ChangeBinInt- 00:07:08.222 [2024-12-12 06:45:15.733702] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.222 [2024-12-12 06:45:15.733726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.223 [2024-12-12 06:45:15.733800] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.223 [2024-12-12 06:45:15.733814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.482 #15 NEW cov: 12338 ft: 14756 corp: 14/28b lim: 5 exec/s: 0 rss: 72Mb L: 2/5 MS: 1 InsertByte- 00:07:08.482 [2024-12-12 06:45:15.793724] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.482 [2024-12-12 06:45:15.793749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.482 #16 NEW cov: 12338 ft: 14769 
corp: 15/29b lim: 5 exec/s: 0 rss: 72Mb L: 1/5 MS: 1 CrossOver- 00:07:08.482 [2024-12-12 06:45:15.853877] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.482 [2024-12-12 06:45:15.853903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.482 #17 NEW cov: 12338 ft: 14794 corp: 16/30b lim: 5 exec/s: 0 rss: 72Mb L: 1/5 MS: 1 ChangeBit- 00:07:08.482 [2024-12-12 06:45:15.914348] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.482 [2024-12-12 06:45:15.914373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.482 [2024-12-12 06:45:15.914458] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.482 [2024-12-12 06:45:15.914472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.482 [2024-12-12 06:45:15.914526] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.482 [2024-12-12 06:45:15.914539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:08.741 NEW_FUNC[1/1]: 0x1c586b8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:652 00:07:08.741 #18 NEW cov: 12361 ft: 15000 corp: 17/33b lim: 5 exec/s: 18 rss: 73Mb L: 3/5 MS: 1 EraseBytes- 00:07:08.741 [2024-12-12 06:45:16.235026] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.741 [2024-12-12 06:45:16.235072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.741 #19 NEW cov: 12361 ft: 15104 corp: 18/34b lim: 5 exec/s: 19 rss: 73Mb L: 1/5 MS: 1 ChangeByte- 00:07:09.001 [2024-12-12 06:45:16.274969] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.001 [2024-12-12 06:45:16.274996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.001 #20 NEW cov: 12361 ft: 15117 corp: 19/35b lim: 5 exec/s: 20 rss: 73Mb L: 1/5 MS: 1 EraseBytes- 00:07:09.001 [2024-12-12 06:45:16.335766] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.001 [2024-12-12 06:45:16.335793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.001 [2024-12-12 06:45:16.335867] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.001 [2024-12-12 06:45:16.335881] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.001 [2024-12-12 06:45:16.335939] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.001 [2024-12-12 06:45:16.335953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:09.001 [2024-12-12 06:45:16.336010] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.001 [2024-12-12 06:45:16.336024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:09.001 [2024-12-12 06:45:16.336081] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.001 [2024-12-12 06:45:16.336094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:09.001 #21 NEW cov: 12361 ft: 15124 corp: 20/40b lim: 5 exec/s: 21 rss: 73Mb L: 5/5 MS: 1 InsertRepeatedBytes- 00:07:09.001 [2024-12-12 06:45:16.375427] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.001 [2024-12-12 06:45:16.375452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.001 [2024-12-12 06:45:16.375511] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.001 [2024-12-12 06:45:16.375524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.001 #22 NEW cov: 12361 ft: 15144 corp: 21/42b lim: 5 exec/s: 22 rss: 73Mb L: 2/5 MS: 1 InsertByte- 00:07:09.001 [2024-12-12 06:45:16.415530] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.001 [2024-12-12 06:45:16.415556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.001 [2024-12-12 06:45:16.415613] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.001 [2024-12-12 06:45:16.415627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.001 #23 NEW cov: 12361 ft: 15149 corp: 22/44b lim: 5 exec/s: 23 rss: 73Mb L: 2/5 MS: 1 InsertByte- 00:07:09.001 [2024-12-12 06:45:16.475530] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.001 [2024-12-12 06:45:16.475555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.001 #24 NEW cov: 12361 ft: 15160 corp: 23/45b lim: 5 exec/s: 24 rss: 73Mb L: 1/5 
MS: 1 ChangeBit- 00:07:09.001 [2024-12-12 06:45:16.515673] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.001 [2024-12-12 06:45:16.515701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.261 #25 NEW cov: 12361 ft: 15235 corp: 24/46b lim: 5 exec/s: 25 rss: 73Mb L: 1/5 MS: 1 ChangeByte- 00:07:09.261 [2024-12-12 06:45:16.575980] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.261 [2024-12-12 06:45:16.576006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.261 [2024-12-12 06:45:16.576065] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.261 [2024-12-12 06:45:16.576078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.261 #26 NEW cov: 12361 ft: 15262 corp: 25/48b lim: 5 exec/s: 26 rss: 74Mb L: 2/5 MS: 1 InsertByte- 00:07:09.261 [2024-12-12 06:45:16.636589] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.261 [2024-12-12 06:45:16.636615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.261 [2024-12-12 06:45:16.636673] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.261 [2024-12-12 06:45:16.636686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.261 [2024-12-12 06:45:16.636741] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.261 [2024-12-12 06:45:16.636754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:09.261 [2024-12-12 06:45:16.636809] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.261 [2024-12-12 06:45:16.636822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:09.261 [2024-12-12 06:45:16.636878] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.261 [2024-12-12 06:45:16.636891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:09.261 #27 NEW cov: 12361 ft: 15270 corp: 26/53b lim: 5 exec/s: 27 rss: 74Mb L: 5/5 MS: 1 CopyPart- 00:07:09.261 [2024-12-12 06:45:16.676091] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:07:09.261 [2024-12-12 06:45:16.676116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.261 #28 NEW cov: 12361 ft: 15279 corp: 27/54b lim: 5 exec/s: 28 rss: 74Mb L: 1/5 MS: 1 ChangeByte- 00:07:09.261 [2024-12-12 06:45:16.716389] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.261 [2024-12-12 06:45:16.716414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.261 [2024-12-12 06:45:16.716473] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.261 [2024-12-12 06:45:16.716492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.261 #29 NEW cov: 12361 ft: 15333 corp: 28/56b lim: 5 exec/s: 29 rss: 74Mb L: 2/5 MS: 1 CopyPart- 00:07:09.261 [2024-12-12 06:45:16.756328] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.261 [2024-12-12 06:45:16.756354] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.261 #30 NEW cov: 12361 ft: 15344 corp: 29/57b lim: 5 exec/s: 30 rss: 74Mb L: 1/5 MS: 1 ChangeBinInt- 00:07:09.521 [2024-12-12 06:45:16.797071] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.521 [2024-12-12 06:45:16.797097] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.521 [2024-12-12 06:45:16.797158] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.521 [2024-12-12 06:45:16.797173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.521 [2024-12-12 06:45:16.797229] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.521 [2024-12-12 06:45:16.797260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:09.521 [2024-12-12 06:45:16.797317] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.521 [2024-12-12 06:45:16.797330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:09.521 [2024-12-12 06:45:16.797388] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.521 [2024-12-12 06:45:16.797401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:09.521 #31 NEW 
cov: 12361 ft: 15350 corp: 30/62b lim: 5 exec/s: 31 rss: 74Mb L: 5/5 MS: 1 CrossOver- 00:07:09.521 [2024-12-12 06:45:16.836808] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.521 [2024-12-12 06:45:16.836833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.521 [2024-12-12 06:45:16.836891] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.521 [2024-12-12 06:45:16.836905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.521 #32 NEW cov: 12361 ft: 15371 corp: 31/64b lim: 5 exec/s: 32 rss: 74Mb L: 2/5 MS: 1 CopyPart- 00:07:09.521 [2024-12-12 06:45:16.896904] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.521 [2024-12-12 06:45:16.896929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.521 [2024-12-12 06:45:16.896986] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.521 [2024-12-12 06:45:16.896999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.521 #33 NEW cov: 12361 ft: 15406 corp: 32/66b lim: 5 exec/s: 33 rss: 74Mb L: 2/5 MS: 1 InsertByte- 00:07:09.521 [2024-12-12 06:45:16.956901] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.521 [2024-12-12 06:45:16.956926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.521 #34 NEW cov: 12361 ft: 15444 corp: 33/67b lim: 5 exec/s: 34 rss: 74Mb L: 1/5 MS: 1 ShuffleBytes- 00:07:09.521 [2024-12-12 06:45:16.997625] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.521 [2024-12-12 06:45:16.997650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.521 [2024-12-12 06:45:16.997708] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.521 [2024-12-12 06:45:16.997722] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.521 [2024-12-12 06:45:16.997778] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.521 [2024-12-12 06:45:16.997791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:09.521 [2024-12-12 06:45:16.997848] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE 
ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.521 [2024-12-12 06:45:16.997861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:09.521 [2024-12-12 06:45:16.997916] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.521 [2024-12-12 06:45:16.997929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:09.521 #35 NEW cov: 12361 ft: 15456 corp: 34/72b lim: 5 exec/s: 35 rss: 74Mb L: 5/5 MS: 1 CrossOver- 00:07:09.522 [2024-12-12 06:45:17.037077] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.522 [2024-12-12 06:45:17.037103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.781 #36 NEW cov: 12361 ft: 15477 corp: 35/73b lim: 5 exec/s: 36 rss: 74Mb L: 1/5 MS: 1 ChangeByte- 00:07:09.781 [2024-12-12 06:45:17.077838] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.781 [2024-12-12 06:45:17.077864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.781 [2024-12-12 06:45:17.077922] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.781 [2024-12-12 06:45:17.077936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.781 [2024-12-12 06:45:17.078006] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.781 [2024-12-12 06:45:17.078020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:09.781 [2024-12-12 06:45:17.078076] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.781 [2024-12-12 06:45:17.078088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:09.781 [2024-12-12 06:45:17.078152] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.781 [2024-12-12 06:45:17.078165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:09.781 #37 NEW cov: 12361 ft: 15514 corp: 36/78b lim: 5 exec/s: 18 rss: 74Mb L: 5/5 MS: 1 CrossOver- 00:07:09.781 #37 DONE cov: 12361 ft: 15514 corp: 36/78b lim: 5 exec/s: 18 rss: 74Mb 00:07:09.781 Done 37 runs in 2 second(s) 00:07:09.781 06:45:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_8.conf /var/tmp/suppress_nvmf_fuzz 00:07:09.781 06:45:17 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- 
# (( i++ )) 00:07:09.781 06:45:17 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:09.781 06:45:17 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 9 1 0x1 00:07:09.781 06:45:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=9 00:07:09.781 06:45:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:09.781 06:45:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:09.781 06:45:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:07:09.781 06:45:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_9.conf 00:07:09.781 06:45:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:09.782 06:45:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:09.782 06:45:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 9 00:07:09.782 06:45:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4409 00:07:09.782 06:45:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:07:09.782 06:45:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409' 00:07:09.782 06:45:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4409"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:09.782 06:45:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:09.782 06:45:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:09.782 06:45:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409' -c /tmp/fuzz_json_9.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 -Z 9 00:07:09.782 [2024-12-12 06:45:17.268804] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 00:07:09.782 [2024-12-12 06:45:17.268872] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1157133 ] 00:07:10.041 [2024-12-12 06:45:17.457981] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.041 [2024-12-12 06:45:17.490846] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.041 [2024-12-12 06:45:17.549772] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:10.300 [2024-12-12 06:45:17.566096] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4409 *** 00:07:10.300 INFO: Running with entropic power schedule (0xFF, 100). 
00:07:10.300 INFO: Seed: 373396786 00:07:10.300 INFO: Loaded 1 modules (390948 inline 8-bit counters): 390948 [0x2c8afcc, 0x2cea6f0), 00:07:10.300 INFO: Loaded 1 PC tables (390948 PCs): 390948 [0x2cea6f0,0x32e1930), 00:07:10.300 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:07:10.300 INFO: A corpus is not provided, starting from an empty corpus 00:07:10.300 [2024-12-12 06:45:17.632170] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.300 [2024-12-12 06:45:17.632212] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.300 #2 INITED cov: 12135 ft: 12110 corp: 1/1b exec/s: 0 rss: 70Mb 00:07:10.300 [2024-12-12 06:45:17.673154] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.300 [2024-12-12 06:45:17.673187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.300 [2024-12-12 06:45:17.673317] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.300 [2024-12-12 06:45:17.673335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.300 [2024-12-12 06:45:17.673451] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.300 [2024-12-12 06:45:17.673469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:10.300 [2024-12-12 06:45:17.673590] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.300 [2024-12-12 06:45:17.673609] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:10.300 [2024-12-12 06:45:17.673728] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.300 [2024-12-12 06:45:17.673745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:10.300 #3 NEW cov: 12248 ft: 13618 corp: 2/6b lim: 5 exec/s: 0 rss: 71Mb L: 5/5 MS: 1 InsertRepeatedBytes- 00:07:10.300 [2024-12-12 06:45:17.732593] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.300 [2024-12-12 06:45:17.732624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.300 [2024-12-12 06:45:17.732763] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.300 [2024-12-12 06:45:17.732780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.300 #4 NEW cov: 12254 ft: 13950 corp: 3/8b lim: 5 exec/s: 0 rss: 71Mb L: 2/5 MS: 1 InsertByte- 00:07:10.300 [2024-12-12 06:45:17.772708] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.300 [2024-12-12 06:45:17.772738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.300 [2024-12-12 06:45:17.772867] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.300 [2024-12-12 06:45:17.772886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.300 #5 NEW cov: 12339 ft: 14209 corp: 4/10b lim: 5 exec/s: 0 rss: 71Mb L: 2/5 MS: 1 CrossOver- 00:07:10.560 [2024-12-12 06:45:17.832908] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.560 [2024-12-12 06:45:17.832942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.560 [2024-12-12 06:45:17.833063] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.560 [2024-12-12 06:45:17.833080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.560 #6 NEW cov: 12339 ft: 14287 corp: 5/12b lim: 5 exec/s: 0 rss: 71Mb L: 2/5 MS: 1 ChangeByte- 00:07:10.560 [2024-12-12 06:45:17.893138] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.560 [2024-12-12 06:45:17.893172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.560 [2024-12-12 06:45:17.893294] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.560 [2024-12-12 06:45:17.893310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.560 #7 NEW cov: 12339 ft: 14417 corp: 6/14b lim: 5 exec/s: 0 rss: 71Mb L: 2/5 MS: 1 InsertByte- 00:07:10.560 [2024-12-12 06:45:17.933126] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.560 [2024-12-12 06:45:17.933163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.560 [2024-12-12 06:45:17.933285] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.560 [2024-12-12 06:45:17.933302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.560 #8 NEW cov: 12339 ft: 14461 corp: 7/16b 
lim: 5 exec/s: 0 rss: 71Mb L: 2/5 MS: 1 ChangeBit- 00:07:10.560 [2024-12-12 06:45:17.993387] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.560 [2024-12-12 06:45:17.993416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.560 [2024-12-12 06:45:17.993545] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.560 [2024-12-12 06:45:17.993562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.560 #9 NEW cov: 12339 ft: 14506 corp: 8/18b lim: 5 exec/s: 0 rss: 71Mb L: 2/5 MS: 1 ChangeByte- 00:07:10.560 [2024-12-12 06:45:18.053478] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.560 [2024-12-12 06:45:18.053506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.560 [2024-12-12 06:45:18.053639] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.560 [2024-12-12 06:45:18.053656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.820 #10 NEW cov: 12339 ft: 14524 corp: 9/20b lim: 5 exec/s: 0 rss: 71Mb L: 2/5 MS: 1 ChangeBinInt- 00:07:10.820 [2024-12-12 06:45:18.113674] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.820 [2024-12-12 06:45:18.113704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.820 [2024-12-12 06:45:18.113841] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.820 [2024-12-12 06:45:18.113859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.820 #11 NEW cov: 12339 ft: 14576 corp: 10/22b lim: 5 exec/s: 0 rss: 71Mb L: 2/5 MS: 1 CopyPart- 00:07:10.820 [2024-12-12 06:45:18.153785] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.820 [2024-12-12 06:45:18.153814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.820 [2024-12-12 06:45:18.153945] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.820 [2024-12-12 06:45:18.153961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.820 #12 NEW cov: 12339 ft: 14652 corp: 11/24b lim: 5 exec/s: 0 rss: 71Mb L: 2/5 MS: 1 ShuffleBytes- 00:07:10.820 [2024-12-12 06:45:18.213939] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.820 [2024-12-12 06:45:18.213968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.820 [2024-12-12 06:45:18.214102] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.820 [2024-12-12 06:45:18.214118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.820 #13 NEW cov: 12339 ft: 14673 corp: 12/26b lim: 5 exec/s: 0 rss: 71Mb L: 2/5 MS: 1 CrossOver- 00:07:10.820 [2024-12-12 06:45:18.254041] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.820 [2024-12-12 06:45:18.254071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.820 [2024-12-12 06:45:18.254208] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.820 [2024-12-12 06:45:18.254225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.820 #14 NEW cov: 12339 ft: 14686 corp: 13/28b lim: 5 exec/s: 0 rss: 71Mb L: 2/5 MS: 1 ShuffleBytes- 00:07:10.820 [2024-12-12 06:45:18.294404] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.820 [2024-12-12 06:45:18.294433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.820 [2024-12-12 06:45:18.294569] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.820 [2024-12-12 06:45:18.294586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.820 [2024-12-12 06:45:18.294706] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.820 [2024-12-12 06:45:18.294722] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:10.820 #15 NEW cov: 12339 ft: 14919 corp: 14/31b lim: 5 exec/s: 0 rss: 71Mb L: 3/5 MS: 1 CrossOver- 00:07:11.079 [2024-12-12 06:45:18.354657] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.079 [2024-12-12 06:45:18.354687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:11.079 [2024-12-12 06:45:18.354808] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.079 [2024-12-12 
06:45:18.354825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:11.079 [2024-12-12 06:45:18.354950] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.079 [2024-12-12 06:45:18.354969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:11.079 #16 NEW cov: 12339 ft: 14951 corp: 15/34b lim: 5 exec/s: 0 rss: 71Mb L: 3/5 MS: 1 ChangeByte- 00:07:11.079 [2024-12-12 06:45:18.415017] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.079 [2024-12-12 06:45:18.415046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:11.079 [2024-12-12 06:45:18.415180] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.079 [2024-12-12 06:45:18.415199] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:11.079 [2024-12-12 06:45:18.415318] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.079 [2024-12-12 06:45:18.415335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:11.079 [2024-12-12 06:45:18.415457] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.079 [2024-12-12 06:45:18.415473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:11.079 #17 NEW cov: 12339 ft: 15014 corp: 16/38b lim: 5 exec/s: 0 rss: 71Mb L: 4/5 MS: 1 CopyPart- 00:07:11.079 [2024-12-12 06:45:18.455371] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.079 [2024-12-12 06:45:18.455399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:11.079 [2024-12-12 06:45:18.455533] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.079 [2024-12-12 06:45:18.455551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:11.079 [2024-12-12 06:45:18.455668] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.079 [2024-12-12 06:45:18.455684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:11.079 [2024-12-12 06:45:18.455801] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.079 [2024-12-12 06:45:18.455818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:11.079 [2024-12-12 06:45:18.455934] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.079 [2024-12-12 06:45:18.455947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:11.338 NEW_FUNC[1/1]: 0x1c586b8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:652 00:07:11.338 #18 NEW cov: 12362 ft: 15052 corp: 17/43b lim: 5 exec/s: 18 rss: 73Mb L: 5/5 MS: 1 CopyPart- 00:07:11.338 [2024-12-12 06:45:18.795827] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.338 [2024-12-12 06:45:18.795868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:11.338 [2024-12-12 06:45:18.795995] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.338 [2024-12-12 06:45:18.796014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:11.338 #19 NEW cov: 12362 ft: 15069 corp: 18/45b lim: 5 exec/s: 19 rss: 73Mb L: 2/5 MS: 1 ChangeByte- 00:07:11.338 [2024-12-12 06:45:18.855790] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.338 [2024-12-12 06:45:18.855821] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:11.338 [2024-12-12 06:45:18.855945] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.338 [2024-12-12 06:45:18.855962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:11.598 #20 NEW cov: 12362 ft: 15089 corp: 19/47b lim: 5 exec/s: 20 rss: 73Mb L: 2/5 MS: 1 ShuffleBytes- 00:07:11.598 [2024-12-12 06:45:18.896154] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.598 [2024-12-12 06:45:18.896184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:11.598 [2024-12-12 06:45:18.896309] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.598 [2024-12-12 06:45:18.896325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:11.598 [2024-12-12 06:45:18.896442] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.598 [2024-12-12 06:45:18.896458] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:11.598 #21 NEW cov: 12362 ft: 15120 corp: 20/50b lim: 5 exec/s: 21 rss: 73Mb L: 3/5 MS: 1 CrossOver- 00:07:11.598 [2024-12-12 06:45:18.936717] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.598 [2024-12-12 06:45:18.936746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:11.598 [2024-12-12 06:45:18.936868] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.598 [2024-12-12 06:45:18.936886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:11.598 [2024-12-12 06:45:18.937012] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.598 [2024-12-12 06:45:18.937035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:11.598 [2024-12-12 06:45:18.937166] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.598 [2024-12-12 06:45:18.937195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:11.598 [2024-12-12 06:45:18.937320] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.598 [2024-12-12 06:45:18.937335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:11.598 #22 NEW cov: 12362 ft: 15164 corp: 21/55b lim: 5 exec/s: 22 rss: 73Mb L: 5/5 MS: 1 ChangeByte- 00:07:11.598 [2024-12-12 06:45:18.986195] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.598 [2024-12-12 06:45:18.986224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:11.598 [2024-12-12 06:45:18.986349] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.598 [2024-12-12 06:45:18.986365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:11.598 #23 NEW cov: 12362 ft: 15246 corp: 22/57b lim: 5 exec/s: 23 rss: 73Mb L: 2/5 MS: 1 ShuffleBytes- 00:07:11.598 [2024-12-12 06:45:19.036337] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.598 [2024-12-12 06:45:19.036367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:11.598 [2024-12-12 06:45:19.036490] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.598 [2024-12-12 06:45:19.036507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:11.598 #24 NEW cov: 12362 ft: 15249 corp: 23/59b lim: 5 exec/s: 24 rss: 73Mb L: 2/5 MS: 1 ChangeBit- 00:07:11.598 [2024-12-12 06:45:19.106569] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.598 [2024-12-12 06:45:19.106597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:11.598 [2024-12-12 06:45:19.106724] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.598 [2024-12-12 06:45:19.106740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:11.858 #25 NEW cov: 12362 ft: 15285 corp: 24/61b lim: 5 exec/s: 25 rss: 73Mb L: 2/5 MS: 1 ShuffleBytes- 00:07:11.858 [2024-12-12 06:45:19.177615] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.858 [2024-12-12 06:45:19.177644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:11.858 [2024-12-12 06:45:19.177770] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.858 [2024-12-12 06:45:19.177788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:11.858 [2024-12-12 06:45:19.177919] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.858 [2024-12-12 06:45:19.177936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:11.858 [2024-12-12 06:45:19.178065] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.858 [2024-12-12 06:45:19.178082] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:11.858 [2024-12-12 06:45:19.178215] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.858 [2024-12-12 06:45:19.178232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:11.858 #26 NEW cov: 12362 ft: 15293 corp: 25/66b lim: 5 exec/s: 26 rss: 73Mb L: 5/5 MS: 1 CrossOver- 00:07:11.858 [2024-12-12 06:45:19.226923] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.858 [2024-12-12 06:45:19.226954] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:11.858 [2024-12-12 06:45:19.227074] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.858 [2024-12-12 06:45:19.227093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:11.858 #27 NEW cov: 12362 ft: 15320 corp: 26/68b lim: 5 exec/s: 27 rss: 73Mb L: 2/5 MS: 1 ShuffleBytes- 00:07:11.858 [2024-12-12 06:45:19.267186] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.858 [2024-12-12 06:45:19.267216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:11.858 [2024-12-12 06:45:19.267349] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.858 [2024-12-12 06:45:19.267367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:11.858 [2024-12-12 06:45:19.267487] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.858 [2024-12-12 06:45:19.267504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:11.858 #28 NEW cov: 12362 ft: 15326 corp: 27/71b lim: 5 exec/s: 28 rss: 73Mb L: 3/5 MS: 1 CopyPart- 00:07:11.858 [2024-12-12 06:45:19.317655] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.858 [2024-12-12 06:45:19.317684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:11.858 [2024-12-12 06:45:19.317803] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.858 [2024-12-12 06:45:19.317822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:11.858 [2024-12-12 06:45:19.317932] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.858 [2024-12-12 06:45:19.317953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:11.858 [2024-12-12 06:45:19.318074] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.858 [2024-12-12 06:45:19.318091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:11.858 #29 NEW cov: 12362 ft: 15338 corp: 28/75b lim: 5 exec/s: 29 rss: 73Mb L: 4/5 MS: 1 InsertByte- 00:07:12.118 [2024-12-12 06:45:19.387460] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.118 [2024-12-12 06:45:19.387491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.118 [2024-12-12 06:45:19.387611] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.118 [2024-12-12 06:45:19.387629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:12.118 #30 NEW cov: 12362 ft: 15350 corp: 29/77b lim: 5 exec/s: 30 rss: 73Mb L: 2/5 MS: 1 ChangeByte- 00:07:12.118 [2024-12-12 06:45:19.438373] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.118 [2024-12-12 06:45:19.438402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.118 [2024-12-12 06:45:19.438520] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.118 [2024-12-12 06:45:19.438538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:12.118 [2024-12-12 06:45:19.438656] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.118 [2024-12-12 06:45:19.438673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:12.118 [2024-12-12 06:45:19.438797] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.118 [2024-12-12 06:45:19.438812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:12.118 [2024-12-12 06:45:19.438937] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:0000000c cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.118 [2024-12-12 06:45:19.438955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:12.118 #31 NEW cov: 12362 ft: 15386 corp: 30/82b lim: 5 exec/s: 31 rss: 73Mb L: 5/5 MS: 1 ChangeBit- 00:07:12.118 [2024-12-12 06:45:19.498082] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.118 [2024-12-12 06:45:19.498111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.118 [2024-12-12 06:45:19.498224] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.118 [2024-12-12 06:45:19.498241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 
dnr:0 00:07:12.118 [2024-12-12 06:45:19.498363] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.118 [2024-12-12 06:45:19.498384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:12.118 #32 NEW cov: 12362 ft: 15423 corp: 31/85b lim: 5 exec/s: 32 rss: 73Mb L: 3/5 MS: 1 CrossOver- 00:07:12.118 [2024-12-12 06:45:19.568236] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.118 [2024-12-12 06:45:19.568266] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.118 [2024-12-12 06:45:19.568388] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.118 [2024-12-12 06:45:19.568406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:12.118 [2024-12-12 06:45:19.568527] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.118 [2024-12-12 06:45:19.568543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:12.118 #33 NEW cov: 12362 ft: 15432 corp: 32/88b lim: 5 exec/s: 33 rss: 73Mb L: 3/5 MS: 1 InsertByte- 00:07:12.118 [2024-12-12 06:45:19.618706] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.118 [2024-12-12 06:45:19.618736] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.118 [2024-12-12 06:45:19.618862] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.118 [2024-12-12 06:45:19.618879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:12.118 [2024-12-12 06:45:19.619007] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.118 [2024-12-12 06:45:19.619024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:12.118 [2024-12-12 06:45:19.619141] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.118 [2024-12-12 06:45:19.619162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:12.378 #34 NEW cov: 12362 ft: 15436 corp: 33/92b lim: 5 exec/s: 17 rss: 73Mb L: 4/5 MS: 1 CrossOver- 00:07:12.378 #34 DONE cov: 12362 ft: 15436 corp: 33/92b lim: 5 exec/s: 17 rss: 73Mb 00:07:12.378 Done 34 runs in 2 second(s) 00:07:12.378 06:45:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf 
/tmp/fuzz_json_9.conf /var/tmp/suppress_nvmf_fuzz 00:07:12.378 06:45:19 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:12.378 06:45:19 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:12.378 06:45:19 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 10 1 0x1 00:07:12.378 06:45:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=10 00:07:12.378 06:45:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:12.378 06:45:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:12.378 06:45:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:07:12.378 06:45:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_10.conf 00:07:12.378 06:45:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:12.378 06:45:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:12.378 06:45:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 10 00:07:12.378 06:45:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4410 00:07:12.378 06:45:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:07:12.378 06:45:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' 00:07:12.378 06:45:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4410"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:12.378 06:45:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:12.378 06:45:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:12.378 06:45:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' -c /tmp/fuzz_json_10.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 -Z 10 00:07:12.378 [2024-12-12 06:45:19.810010] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 00:07:12.378 [2024-12-12 06:45:19.810079] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1157673 ] 00:07:12.637 [2024-12-12 06:45:19.998449] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.637 [2024-12-12 06:45:20.036200] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.637 [2024-12-12 06:45:20.095476] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:12.637 [2024-12-12 06:45:20.111758] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4410 *** 00:07:12.637 INFO: Running with entropic power schedule (0xFF, 100). 
00:07:12.637 INFO: Seed: 2920434516 00:07:12.637 INFO: Loaded 1 modules (390948 inline 8-bit counters): 390948 [0x2c8afcc, 0x2cea6f0), 00:07:12.637 INFO: Loaded 1 PC tables (390948 PCs): 390948 [0x2cea6f0,0x32e1930), 00:07:12.637 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:07:12.637 INFO: A corpus is not provided, starting from an empty corpus 00:07:12.637 #2 INITED exec/s: 0 rss: 66Mb 00:07:12.637 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:12.637 This may also happen if the target rejected all inputs we tried so far 00:07:12.896 [2024-12-12 06:45:20.177415] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a28ffff cdw11:d8d8d8d8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.896 [2024-12-12 06:45:20.177444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.896 [2024-12-12 06:45:20.177502] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:d8d8d8d8 cdw11:d8d8d8d8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.896 [2024-12-12 06:45:20.177515] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:12.896 [2024-12-12 06:45:20.177572] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:d8d8d8d8 cdw11:d8d8ff5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.896 [2024-12-12 06:45:20.177586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:13.156 NEW_FUNC[1/716]: 0x448aa8 in fuzz_admin_security_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:205 00:07:13.156 NEW_FUNC[2/716]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:13.156 #30 NEW cov: 12140 ft: 12141 corp: 2/25b lim: 40 exec/s: 0 rss: 72Mb L: 24/24 MS: 3 CMP-InsertByte-InsertRepeatedBytes- DE: "\377\377\377["- 00:07:13.156 [2024-12-12 06:45:20.518561] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a400a28 cdw11:ffffd8d8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.156 [2024-12-12 06:45:20.518617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.156 [2024-12-12 06:45:20.518707] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:d8d8d8d8 cdw11:d8d8d8d8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.156 [2024-12-12 06:45:20.518734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:13.156 [2024-12-12 06:45:20.518821] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:d8d8d8d8 cdw11:d8d8d8d8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.156 [2024-12-12 06:45:20.518847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:13.156 #38 NEW cov: 12270 ft: 12932 corp: 3/51b lim: 40 exec/s: 0 rss: 72Mb L: 26/26 MS: 3 ShuffleBytes-InsertByte-CrossOver- 00:07:13.156 [2024-12-12 06:45:20.568474] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a400a28 cdw11:d8ffffd8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.156 [2024-12-12 06:45:20.568501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.156 [2024-12-12 06:45:20.568579] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:d8d8d8d8 cdw11:d8d8d8d8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.156 [2024-12-12 06:45:20.568593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:13.156 [2024-12-12 06:45:20.568655] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:d8d8d8d8 cdw11:d8d8d8d8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.156 [2024-12-12 06:45:20.568668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:13.156 #39 NEW cov: 12276 ft: 13171 corp: 4/77b lim: 40 exec/s: 0 rss: 72Mb L: 26/26 MS: 1 ShuffleBytes- 00:07:13.156 [2024-12-12 06:45:20.628353] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a28d8d8 cdw11:d8d8d8d8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.156 [2024-12-12 06:45:20.628380] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.156 #40 NEW cov: 12361 ft: 13648 corp: 5/89b lim: 40 exec/s: 0 rss: 72Mb L: 12/26 MS: 1 EraseBytes- 00:07:13.415 [2024-12-12 06:45:20.688766] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a400a28 cdw11:ffffd8d8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.415 [2024-12-12 06:45:20.688794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.415 [2024-12-12 06:45:20.688873] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:d8d8d8d8 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.415 [2024-12-12 06:45:20.688887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:13.415 [2024-12-12 06:45:20.688949] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00d8d8d8 cdw11:d8d8d8d8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.415 [2024-12-12 06:45:20.688963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:13.415 #41 NEW cov: 12361 ft: 13782 corp: 6/120b lim: 40 exec/s: 0 rss: 72Mb L: 31/31 MS: 1 InsertRepeatedBytes- 00:07:13.416 [2024-12-12 06:45:20.729079] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a28d8d8 cdw11:d8d8d8d8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.416 [2024-12-12 06:45:20.729107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.416 [2024-12-12 06:45:20.729178] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:bdbdbdbd cdw11:bdbdbdbd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.416 [2024-12-12 
06:45:20.729193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:13.416 [2024-12-12 06:45:20.729255] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:bdbdbdbd cdw11:bdbdbdbd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.416 [2024-12-12 06:45:20.729268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:13.416 [2024-12-12 06:45:20.729330] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:bdbdbdbd cdw11:bdbdbdbd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.416 [2024-12-12 06:45:20.729344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:13.416 [2024-12-12 06:45:20.729406] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:bdbdbdbd cdw11:d8d8ff5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.416 [2024-12-12 06:45:20.729420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:13.416 #42 NEW cov: 12361 ft: 14359 corp: 7/160b lim: 40 exec/s: 0 rss: 73Mb L: 40/40 MS: 1 InsertRepeatedBytes- 00:07:13.416 [2024-12-12 06:45:20.789286] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a28d8d8 cdw11:d8d8d8d8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.416 [2024-12-12 06:45:20.789313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.416 [2024-12-12 06:45:20.789377] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:bdbdbdbd cdw11:bdbdbdbd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.416 [2024-12-12 06:45:20.789391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:13.416 [2024-12-12 06:45:20.789451] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:bdbdbdbd cdw11:bdbdbdbd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.416 [2024-12-12 06:45:20.789465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:13.416 [2024-12-12 06:45:20.789524] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:d8d8d8bd cdw11:bdbdbdbd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.416 [2024-12-12 06:45:20.789537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:13.416 [2024-12-12 06:45:20.789594] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:bdbdbdbd cdw11:d8d8ff5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.416 [2024-12-12 06:45:20.789607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:13.416 #43 NEW cov: 12361 ft: 14514 corp: 8/200b lim: 40 exec/s: 0 rss: 73Mb L: 40/40 MS: 1 CopyPart- 00:07:13.416 [2024-12-12 06:45:20.849145] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a400a28 cdw11:d8ffffd8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.416 [2024-12-12 06:45:20.849176] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.416 [2024-12-12 06:45:20.849239] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:d8d8d8d8 cdw11:d8d8d8d8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.416 [2024-12-12 06:45:20.849255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:13.416 [2024-12-12 06:45:20.849314] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffff5b cdw11:d8d8d8d8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.416 [2024-12-12 06:45:20.849327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:13.416 #44 NEW cov: 12361 ft: 14549 corp: 9/230b lim: 40 exec/s: 0 rss: 73Mb L: 30/40 MS: 1 PersAutoDict- DE: "\377\377\377["- 00:07:13.416 [2024-12-12 06:45:20.909453] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0a28ff cdw11:ffd8d8d8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.416 [2024-12-12 06:45:20.909479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.416 [2024-12-12 06:45:20.909544] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:d8d8d8d8 cdw11:d8d8d8d8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.416 [2024-12-12 06:45:20.909558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:13.416 [2024-12-12 06:45:20.909621] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:d8d8d8d8 cdw11:d8d8d8ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.416 [2024-12-12 06:45:20.909635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:13.416 [2024-12-12 06:45:20.909697] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:28d8d8d8 cdw11:d8d8d8d8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.416 [2024-12-12 06:45:20.909710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:13.416 #45 NEW cov: 12361 ft: 14568 corp: 10/265b lim: 40 exec/s: 0 rss: 73Mb L: 35/40 MS: 1 CrossOver- 00:07:13.676 [2024-12-12 06:45:20.949428] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a400a28 cdw11:ffffd8d8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.676 [2024-12-12 06:45:20.949454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.676 [2024-12-12 06:45:20.949518] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:d8d8d8d8 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.676 [2024-12-12 06:45:20.949532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:13.676 [2024-12-12 06:45:20.949592] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00d8d8d8 cdw11:d8d8d8d8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.676 [2024-12-12 06:45:20.949605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:13.676 #51 NEW cov: 12361 ft: 14636 corp: 11/296b lim: 40 exec/s: 0 rss: 73Mb L: 31/40 MS: 1 ChangeBinInt- 00:07:13.676 [2024-12-12 06:45:21.009613] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a28ffff cdw11:d8d8d8d8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.676 [2024-12-12 06:45:21.009639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.676 [2024-12-12 06:45:21.009701] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:d8d8d8d8 cdw11:d8d8d8d8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.676 [2024-12-12 06:45:21.009714] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:13.676 [2024-12-12 06:45:21.009777] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:d8d8d8d8 cdw11:d8d8d8d8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.676 [2024-12-12 06:45:21.009790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:13.676 #52 NEW cov: 12361 ft: 14663 corp: 12/325b lim: 40 exec/s: 0 rss: 73Mb L: 29/40 MS: 1 CopyPart- 00:07:13.676 [2024-12-12 06:45:21.049677] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a400a28 cdw11:ffffd8d8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.676 [2024-12-12 06:45:21.049703] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.676 [2024-12-12 06:45:21.049762] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:d8d8d8d8 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.676 [2024-12-12 06:45:21.049776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:13.676 [2024-12-12 06:45:21.049853] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00d8d8d8 cdw11:d830d8d8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.676 [2024-12-12 06:45:21.049868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:13.676 NEW_FUNC[1/1]: 0x1c586b8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:652 00:07:13.676 #53 NEW cov: 12384 ft: 14745 corp: 13/356b lim: 40 exec/s: 0 rss: 73Mb L: 31/40 MS: 1 ChangeByte- 00:07:13.676 [2024-12-12 06:45:21.089920] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a28d8d8 cdw11:d8d8d8d8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.676 [2024-12-12 06:45:21.089945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.676 [2024-12-12 06:45:21.090005] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:bdbdbdbd cdw11:bdbdbdbd SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:07:13.676 [2024-12-12 06:45:21.090018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:13.676 [2024-12-12 06:45:21.090079] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:bdbdbdbd cdw11:bdbdbdbd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.676 [2024-12-12 06:45:21.090092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:13.676 [2024-12-12 06:45:21.090154] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:bdbdbdbd cdw11:bdbdbdbd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.676 [2024-12-12 06:45:21.090169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:13.676 #54 NEW cov: 12384 ft: 14815 corp: 14/393b lim: 40 exec/s: 0 rss: 73Mb L: 37/40 MS: 1 EraseBytes- 00:07:13.676 [2024-12-12 06:45:21.130161] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a400a28 cdw11:ffffd8d8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.677 [2024-12-12 06:45:21.130187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.677 [2024-12-12 06:45:21.130250] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:d8d8d8d8 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.677 [2024-12-12 06:45:21.130263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:13.677 [2024-12-12 06:45:21.130323] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00d8d8d8 cdw11:d8d8d8d8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.677 [2024-12-12 06:45:21.130339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:13.677 [2024-12-12 06:45:21.130396] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:d8d8dad8 cdw11:d8ff5bd8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.677 [2024-12-12 06:45:21.130409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:13.677 [2024-12-12 06:45:21.130470] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:d8d8ffff cdw11:ff5bd8d8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.677 [2024-12-12 06:45:21.130483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:13.677 #55 NEW cov: 12384 ft: 14833 corp: 15/433b lim: 40 exec/s: 55 rss: 73Mb L: 40/40 MS: 1 CrossOver- 00:07:13.677 [2024-12-12 06:45:21.190234] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a400a28 cdw11:ffffd8d8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.677 [2024-12-12 06:45:21.190260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.677 [2024-12-12 06:45:21.190323] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 
nsid:0 cdw10:d8d8d8d8 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.677 [2024-12-12 06:45:21.190337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:13.677 [2024-12-12 06:45:21.190401] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00d8d8d8 cdw11:d8d8d8d8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.677 [2024-12-12 06:45:21.190414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:13.677 [2024-12-12 06:45:21.190478] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:d8d8d8d8 cdw11:332969bb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.677 [2024-12-12 06:45:21.190491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:13.936 #56 NEW cov: 12384 ft: 14838 corp: 16/472b lim: 40 exec/s: 56 rss: 73Mb L: 39/40 MS: 1 CMP- DE: "3)i\273;!\002\000"- 00:07:13.937 [2024-12-12 06:45:21.230200] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a400a28 cdw11:d8ffffd8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.937 [2024-12-12 06:45:21.230226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.937 [2024-12-12 06:45:21.230285] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:d8d8d87a cdw11:d8d8d8d8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.937 [2024-12-12 06:45:21.230299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:13.937 [2024-12-12 06:45:21.230358] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffff5b cdw11:d8d8d8d8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.937 [2024-12-12 06:45:21.230371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:13.937 #57 NEW cov: 12384 ft: 14886 corp: 17/502b lim: 40 exec/s: 57 rss: 73Mb L: 30/40 MS: 1 ChangeByte- 00:07:13.937 [2024-12-12 06:45:21.290399] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a28d8d8 cdw11:d8d8d80a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.937 [2024-12-12 06:45:21.290424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.937 [2024-12-12 06:45:21.290487] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:400a28ff cdw11:ffd8d8d8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.937 [2024-12-12 06:45:21.290504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:13.937 [2024-12-12 06:45:21.290564] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:d8d8d800 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.937 [2024-12-12 06:45:21.290577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:13.937 #58 NEW cov: 12384 ft: 14922 corp: 18/527b lim: 40 
exec/s: 58 rss: 73Mb L: 25/40 MS: 1 CrossOver- 00:07:13.937 [2024-12-12 06:45:21.330470] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a400a28 cdw11:d8ffffd8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.937 [2024-12-12 06:45:21.330496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.937 [2024-12-12 06:45:21.330558] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:d8d8d8d8 cdw11:d8d8d8d8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.937 [2024-12-12 06:45:21.330571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:13.937 [2024-12-12 06:45:21.330646] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffff5b cdw11:d8d8d8d8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.937 [2024-12-12 06:45:21.330659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:13.937 #59 NEW cov: 12384 ft: 14943 corp: 19/557b lim: 40 exec/s: 59 rss: 73Mb L: 30/40 MS: 1 ShuffleBytes- 00:07:13.937 [2024-12-12 06:45:21.370591] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a400a28 cdw11:ffd8ffd8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.937 [2024-12-12 06:45:21.370616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.937 [2024-12-12 06:45:21.370681] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:d8d8d8d8 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.937 [2024-12-12 06:45:21.370694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:13.937 [2024-12-12 06:45:21.370757] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00d8d8d8 cdw11:d8d8d8d8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.937 [2024-12-12 06:45:21.370770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:13.937 #60 NEW cov: 12384 ft: 14960 corp: 20/588b lim: 40 exec/s: 60 rss: 73Mb L: 31/40 MS: 1 ShuffleBytes- 00:07:13.937 [2024-12-12 06:45:21.410709] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a28ffff cdw11:d84ad8d8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.937 [2024-12-12 06:45:21.410734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.937 [2024-12-12 06:45:21.410793] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:d8d8d8d8 cdw11:d8d8d8d8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.937 [2024-12-12 06:45:21.410807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:13.937 [2024-12-12 06:45:21.410865] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:d8d8d8d8 cdw11:d8d8d8d8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.937 [2024-12-12 06:45:21.410878] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:13.937 #61 NEW cov: 12384 ft: 14978 corp: 21/618b lim: 40 exec/s: 61 rss: 73Mb L: 30/40 MS: 1 InsertByte- 00:07:14.197 [2024-12-12 06:45:21.471015] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a28d8d8 cdw11:d8d8d8d8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.197 [2024-12-12 06:45:21.471041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.197 [2024-12-12 06:45:21.471104] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:bdbdbdbd cdw11:bdbdbdbd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.197 [2024-12-12 06:45:21.471117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:14.197 [2024-12-12 06:45:21.471174] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:bdbdbdbd cdw11:bdbdbdbd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.197 [2024-12-12 06:45:21.471187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:14.197 [2024-12-12 06:45:21.471247] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:bdbdbdbd cdw11:d8d8ff5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.197 [2024-12-12 06:45:21.471260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:14.197 #62 NEW cov: 12384 ft: 15038 corp: 22/650b lim: 40 exec/s: 62 rss: 73Mb L: 32/40 MS: 1 EraseBytes- 00:07:14.197 [2024-12-12 06:45:21.511116] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a400a28 cdw11:d8ffffd8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.197 [2024-12-12 06:45:21.511141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.197 [2024-12-12 06:45:21.511210] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:d8d8d87a cdw11:d8ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.197 [2024-12-12 06:45:21.511225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:14.197 [2024-12-12 06:45:21.511288] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffd8d8d8 cdw11:ffffff5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.197 [2024-12-12 06:45:21.511301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:14.197 [2024-12-12 06:45:21.511361] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:d8d8d8d8 cdw11:d8d8d8d8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.197 [2024-12-12 06:45:21.511374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:14.197 #63 NEW cov: 12384 ft: 15056 corp: 23/684b lim: 40 exec/s: 63 rss: 73Mb L: 34/40 MS: 1 InsertRepeatedBytes- 00:07:14.197 [2024-12-12 06:45:21.571039] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a400a28 cdw11:ffffd8d8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.197 [2024-12-12 06:45:21.571064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.197 [2024-12-12 06:45:21.571124] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:d8d8d8d8 cdw11:d8d8d8d8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.197 [2024-12-12 06:45:21.571137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:14.197 #64 NEW cov: 12384 ft: 15287 corp: 24/706b lim: 40 exec/s: 64 rss: 73Mb L: 22/40 MS: 1 EraseBytes- 00:07:14.197 [2024-12-12 06:45:21.611302] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a400a28 cdw11:ffffd8d8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.197 [2024-12-12 06:45:21.611331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.197 [2024-12-12 06:45:21.611392] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:d8d8d8d8 cdw11:0000ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.197 [2024-12-12 06:45:21.611405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:14.197 [2024-12-12 06:45:21.611480] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ff5bd8d8 cdw11:d8d8d8d8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.197 [2024-12-12 06:45:21.611494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:14.197 #65 NEW cov: 12384 ft: 15308 corp: 25/737b lim: 40 exec/s: 65 rss: 74Mb L: 31/40 MS: 1 PersAutoDict- DE: "\377\377\377["- 00:07:14.197 [2024-12-12 06:45:21.651553] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a28d8d8 cdw11:d8d8d8d8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.197 [2024-12-12 06:45:21.651578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.197 [2024-12-12 06:45:21.651641] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:bdbdbdbd cdw11:bdbdbdbd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.197 [2024-12-12 06:45:21.651654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:14.197 [2024-12-12 06:45:21.651715] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:bdbdbdbd cdw11:bdbdbdbd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.197 [2024-12-12 06:45:21.651728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:14.197 [2024-12-12 06:45:21.651787] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:bdbdbdbd cdw11:bdbdbdd8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.197 [2024-12-12 06:45:21.651801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:14.197 
#66 NEW cov: 12384 ft: 15331 corp: 26/772b lim: 40 exec/s: 66 rss: 74Mb L: 35/40 MS: 1 CopyPart- 00:07:14.197 [2024-12-12 06:45:21.711732] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a400a28 cdw11:ffffd8d8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.197 [2024-12-12 06:45:21.711758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.197 [2024-12-12 06:45:21.711820] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:d8d8d8d8 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.197 [2024-12-12 06:45:21.711833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:14.197 [2024-12-12 06:45:21.711894] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00d8d8d8 cdw11:d8d8d8d8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.197 [2024-12-12 06:45:21.711907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:14.197 [2024-12-12 06:45:21.711969] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:d8d8d8d8 cdw11:332969bb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.197 [2024-12-12 06:45:21.711983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:14.457 #67 NEW cov: 12384 ft: 15352 corp: 27/811b lim: 40 exec/s: 67 rss: 74Mb L: 39/40 MS: 1 ShuffleBytes- 00:07:14.457 [2024-12-12 06:45:21.771763] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a28ffff cdw11:d8d8d8d8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.457 [2024-12-12 06:45:21.771792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.457 [2024-12-12 06:45:21.771869] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:d8d8d8d8 cdw11:d8d8d8d8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.457 [2024-12-12 06:45:21.771883] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:14.457 [2024-12-12 06:45:21.771943] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:d8d8d8d8 cdw11:d8d8d8d8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.457 [2024-12-12 06:45:21.771957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:14.457 #68 NEW cov: 12384 ft: 15401 corp: 28/840b lim: 40 exec/s: 68 rss: 74Mb L: 29/40 MS: 1 CopyPart- 00:07:14.457 [2024-12-12 06:45:21.811979] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a28d8d8 cdw11:d8d8d8d8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.457 [2024-12-12 06:45:21.812005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.457 [2024-12-12 06:45:21.812087] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:bdbdbdbd cdw11:bdc9c9c9 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.457 
[2024-12-12 06:45:21.812101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:14.457 [2024-12-12 06:45:21.812163] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:c9bdbdbd cdw11:bdbdbdbd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.457 [2024-12-12 06:45:21.812177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:14.457 [2024-12-12 06:45:21.812236] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:bdbdbdbd cdw11:bdbdbdbd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.457 [2024-12-12 06:45:21.812250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:14.457 #69 NEW cov: 12384 ft: 15433 corp: 29/876b lim: 40 exec/s: 69 rss: 74Mb L: 36/40 MS: 1 InsertRepeatedBytes- 00:07:14.457 [2024-12-12 06:45:21.851988] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a400a28 cdw11:d8ffffd8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.457 [2024-12-12 06:45:21.852013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.457 [2024-12-12 06:45:21.852077] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:d8d8d8d8 cdw11:d8d8d8ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.457 [2024-12-12 06:45:21.852091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:14.457 [2024-12-12 06:45:21.852153] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffff5b5b cdw11:d8d8d8d8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.457 [2024-12-12 06:45:21.852166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:14.457 #70 NEW cov: 12384 ft: 15450 corp: 30/906b lim: 40 exec/s: 70 rss: 74Mb L: 30/40 MS: 1 PersAutoDict- DE: "\377\377\377["- 00:07:14.457 [2024-12-12 06:45:21.912452] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a28d8d8 cdw11:d8d8d8d8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.457 [2024-12-12 06:45:21.912478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.457 [2024-12-12 06:45:21.912562] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:bdbdbdbd cdw11:bdbdbdbd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.457 [2024-12-12 06:45:21.912576] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:14.457 [2024-12-12 06:45:21.912640] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:bdbdbdbd cdw11:bdbdbdbd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.457 [2024-12-12 06:45:21.912653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:14.457 [2024-12-12 06:45:21.912715] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 
nsid:0 cdw10:26bdbdbd cdw11:bdbdbdbd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.457 [2024-12-12 06:45:21.912729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:14.457 #71 NEW cov: 12384 ft: 15488 corp: 31/942b lim: 40 exec/s: 71 rss: 74Mb L: 36/40 MS: 1 InsertByte- 00:07:14.457 [2024-12-12 06:45:21.972628] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a28d8d8 cdw11:d8d8d8d8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.457 [2024-12-12 06:45:21.972654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.457 [2024-12-12 06:45:21.972717] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:bdbdbdbd cdw11:bdbdbdbd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.457 [2024-12-12 06:45:21.972730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:14.457 [2024-12-12 06:45:21.972790] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:bdbdbdbd cdw11:bdbdbdbd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.457 [2024-12-12 06:45:21.972803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:14.457 [2024-12-12 06:45:21.972864] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:bdbdbdbd cdw11:bdbdbdbd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.457 [2024-12-12 06:45:21.972877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:14.458 [2024-12-12 06:45:21.972938] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:bdbdbdbd cdw11:d8d8ff5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.458 [2024-12-12 06:45:21.972951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:14.717 #72 NEW cov: 12384 ft: 15494 corp: 32/982b lim: 40 exec/s: 72 rss: 74Mb L: 40/40 MS: 1 CopyPart- 00:07:14.717 [2024-12-12 06:45:22.012575] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a28d8d8 cdw11:d8d8d8d8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.717 [2024-12-12 06:45:22.012601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.717 [2024-12-12 06:45:22.012661] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:bdbdbdbd cdw11:bdc9c9c9 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.717 [2024-12-12 06:45:22.012676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:14.717 [2024-12-12 06:45:22.012735] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:c9bdbdbd cdw11:bdbdbdbd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.717 [2024-12-12 06:45:22.012749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:14.717 [2024-12-12 06:45:22.012807] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:bdbdbdbd cdw11:bdbdbdbd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.717 [2024-12-12 06:45:22.012824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:14.717 #73 NEW cov: 12384 ft: 15508 corp: 33/1018b lim: 40 exec/s: 73 rss: 74Mb L: 36/40 MS: 1 ChangeBit- 00:07:14.717 [2024-12-12 06:45:22.072596] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a400a28 cdw11:d8ffffd8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.717 [2024-12-12 06:45:22.072623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.717 [2024-12-12 06:45:22.072686] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:d8d8d8d8 cdw11:d8d8d8d8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.717 [2024-12-12 06:45:22.072701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:14.717 [2024-12-12 06:45:22.072764] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffff5b cdw11:d8d8d8d8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.717 [2024-12-12 06:45:22.072777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:14.717 #74 NEW cov: 12384 ft: 15520 corp: 34/1048b lim: 40 exec/s: 74 rss: 74Mb L: 30/40 MS: 1 ShuffleBytes- 00:07:14.717 [2024-12-12 06:45:22.112659] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a400a28 cdw11:ffffd833 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.717 [2024-12-12 06:45:22.112686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.717 [2024-12-12 06:45:22.112748] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:2969bb3b cdw11:21020000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.717 [2024-12-12 06:45:22.112762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:14.717 [2024-12-12 06:45:22.112825] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00d8d8d8 cdw11:d830d8d8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.718 [2024-12-12 06:45:22.112839] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:14.718 #75 NEW cov: 12384 ft: 15526 corp: 35/1079b lim: 40 exec/s: 75 rss: 74Mb L: 31/40 MS: 1 PersAutoDict- DE: "3)i\273;!\002\000"- 00:07:14.718 [2024-12-12 06:45:22.173212] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a400a28 cdw11:ffffd8d8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.718 [2024-12-12 06:45:22.173238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.718 [2024-12-12 06:45:22.173301] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:d8d8d8d8 cdw11:d8d8d8d8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.718 
[2024-12-12 06:45:22.173314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:14.718 [2024-12-12 06:45:22.173392] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:28ffffd8 cdw11:d8d8d8d8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.718 [2024-12-12 06:45:22.173407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:14.718 [2024-12-12 06:45:22.173468] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:d8d8d8d8 cdw11:d8d8d8d8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.718 [2024-12-12 06:45:22.173480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:14.718 #76 NEW cov: 12384 ft: 15562 corp: 36/1118b lim: 40 exec/s: 38 rss: 74Mb L: 39/40 MS: 1 CopyPart- 00:07:14.718 #76 DONE cov: 12384 ft: 15562 corp: 36/1118b lim: 40 exec/s: 38 rss: 74Mb 00:07:14.718 ###### Recommended dictionary. ###### 00:07:14.718 "\377\377\377[" # Uses: 5 00:07:14.718 "3)i\273;!\002\000" # Uses: 1 00:07:14.718 ###### End of recommended dictionary. ###### 00:07:14.718 Done 76 runs in 2 second(s) 00:07:14.977 06:45:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_10.conf /var/tmp/suppress_nvmf_fuzz 00:07:14.977 06:45:22 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:14.977 06:45:22 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:14.977 06:45:22 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 11 1 0x1 00:07:14.977 06:45:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=11 00:07:14.977 06:45:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:14.977 06:45:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:14.977 06:45:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:07:14.977 06:45:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_11.conf 00:07:14.977 06:45:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:14.977 06:45:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:14.977 06:45:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 11 00:07:14.977 06:45:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4411 00:07:14.977 06:45:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:07:14.977 06:45:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' 00:07:14.977 06:45:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4411"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:14.977 06:45:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:14.977 06:45:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:14.977 06:45:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' -c /tmp/fuzz_json_11.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 -Z 11 00:07:14.977 [2024-12-12 06:45:22.346786] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 00:07:14.978 [2024-12-12 06:45:22.346855] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1158086 ] 00:07:15.237 [2024-12-12 06:45:22.540205] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.237 [2024-12-12 06:45:22.573768] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.237 [2024-12-12 06:45:22.632600] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:15.237 [2024-12-12 06:45:22.648926] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4411 *** 00:07:15.237 INFO: Running with entropic power schedule (0xFF, 100). 00:07:15.237 INFO: Seed: 1161426942 00:07:15.237 INFO: Loaded 1 modules (390948 inline 8-bit counters): 390948 [0x2c8afcc, 0x2cea6f0), 00:07:15.237 INFO: Loaded 1 PC tables (390948 PCs): 390948 [0x2cea6f0,0x32e1930), 00:07:15.237 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:07:15.237 INFO: A corpus is not provided, starting from an empty corpus 00:07:15.237 #2 INITED exec/s: 0 rss: 65Mb 00:07:15.237 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:15.237 This may also happen if the target rejected all inputs we tried so far 00:07:15.237 [2024-12-12 06:45:22.694545] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:32ffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.237 [2024-12-12 06:45:22.694573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.237 [2024-12-12 06:45:22.694647] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.237 [2024-12-12 06:45:22.694660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.237 [2024-12-12 06:45:22.694714] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.237 [2024-12-12 06:45:22.694727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.496 NEW_FUNC[1/717]: 0x44a818 in fuzz_admin_security_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:223 00:07:15.496 NEW_FUNC[2/717]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:15.496 #19 NEW cov: 12170 ft: 12169 corp: 2/26b lim: 40 exec/s: 0 rss: 72Mb L: 25/25 MS: 2 ChangeByte-InsertRepeatedBytes- 00:07:15.496 [2024-12-12 06:45:23.015183] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:000affff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.496 [2024-12-12 06:45:23.015215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.496 [2024-12-12 06:45:23.015287] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.496 [2024-12-12 06:45:23.015301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.756 #23 NEW cov: 12283 ft: 12916 corp: 3/44b lim: 40 exec/s: 0 rss: 72Mb L: 18/25 MS: 4 CopyPart-CopyPart-CMP-InsertRepeatedBytes- DE: "\000\000\000\000"- 00:07:15.756 [2024-12-12 06:45:23.055193] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:000affff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.756 [2024-12-12 06:45:23.055225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.756 [2024-12-12 06:45:23.055297] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.756 [2024-12-12 06:45:23.055312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.756 #24 NEW cov: 12289 ft: 13083 corp: 4/62b lim: 40 exec/s: 0 rss: 72Mb L: 18/25 MS: 1 CrossOver- 00:07:15.756 [2024-12-12 06:45:23.115525] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 
nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.756 [2024-12-12 06:45:23.115552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.756 [2024-12-12 06:45:23.115609] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.756 [2024-12-12 06:45:23.115622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.756 [2024-12-12 06:45:23.115676] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.756 [2024-12-12 06:45:23.115689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.756 #25 NEW cov: 12374 ft: 13438 corp: 5/91b lim: 40 exec/s: 0 rss: 72Mb L: 29/29 MS: 1 InsertRepeatedBytes- 00:07:15.756 [2024-12-12 06:45:23.155612] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:32ffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.756 [2024-12-12 06:45:23.155637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.756 [2024-12-12 06:45:23.155694] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:03000000 cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.756 [2024-12-12 06:45:23.155708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.756 [2024-12-12 06:45:23.155763] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.756 [2024-12-12 06:45:23.155776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.756 #26 NEW cov: 12374 ft: 13490 corp: 6/116b lim: 40 exec/s: 0 rss: 72Mb L: 25/29 MS: 1 CMP- DE: "\003\000\000\000"- 00:07:15.756 [2024-12-12 06:45:23.215483] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.756 [2024-12-12 06:45:23.215509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.756 #29 NEW cov: 12374 ft: 14182 corp: 7/127b lim: 40 exec/s: 0 rss: 72Mb L: 11/29 MS: 3 ChangeBit-CrossOver-InsertRepeatedBytes- 00:07:15.756 [2024-12-12 06:45:23.255627] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.756 [2024-12-12 06:45:23.255652] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.016 #30 NEW cov: 12374 ft: 14328 corp: 8/135b lim: 40 exec/s: 0 rss: 72Mb L: 8/29 MS: 1 EraseBytes- 00:07:16.016 [2024-12-12 06:45:23.315927] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.016 
[2024-12-12 06:45:23.315952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.016 [2024-12-12 06:45:23.316007] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.016 [2024-12-12 06:45:23.316020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:16.016 #31 NEW cov: 12374 ft: 14427 corp: 9/156b lim: 40 exec/s: 0 rss: 72Mb L: 21/29 MS: 1 EraseBytes- 00:07:16.016 [2024-12-12 06:45:23.376072] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:0000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.016 [2024-12-12 06:45:23.376097] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.016 [2024-12-12 06:45:23.376165] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.016 [2024-12-12 06:45:23.376180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:16.016 #32 NEW cov: 12374 ft: 14443 corp: 10/174b lim: 40 exec/s: 0 rss: 72Mb L: 18/29 MS: 1 CopyPart- 00:07:16.016 [2024-12-12 06:45:23.416338] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:32ffffff cdw11:ffffff00 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.016 [2024-12-12 06:45:23.416364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.016 [2024-12-12 06:45:23.416424] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.016 [2024-12-12 06:45:23.416438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:16.016 [2024-12-12 06:45:23.416493] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.016 [2024-12-12 06:45:23.416506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:16.016 #33 NEW cov: 12374 ft: 14491 corp: 11/199b lim: 40 exec/s: 0 rss: 72Mb L: 25/29 MS: 1 PersAutoDict- DE: "\000\000\000\000"- 00:07:16.016 [2024-12-12 06:45:23.476530] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a000200 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.016 [2024-12-12 06:45:23.476555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.016 [2024-12-12 06:45:23.476611] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:0000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.016 [2024-12-12 06:45:23.476624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:16.016 [2024-12-12 06:45:23.476679] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.016 [2024-12-12 06:45:23.476692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:16.016 #34 NEW cov: 12374 ft: 14509 corp: 12/225b lim: 40 exec/s: 0 rss: 72Mb L: 26/29 MS: 1 CMP- DE: "\002\000\000\000\000\000\000\000"- 00:07:16.016 [2024-12-12 06:45:23.536397] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.016 [2024-12-12 06:45:23.536422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.275 #35 NEW cov: 12374 ft: 14541 corp: 13/236b lim: 40 exec/s: 0 rss: 72Mb L: 11/29 MS: 1 CrossOver- 00:07:16.275 [2024-12-12 06:45:23.576746] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.275 [2024-12-12 06:45:23.576772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.275 [2024-12-12 06:45:23.576829] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.275 [2024-12-12 06:45:23.576842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:16.275 [2024-12-12 06:45:23.576894] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00020000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.275 [2024-12-12 06:45:23.576907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:16.275 NEW_FUNC[1/1]: 0x1c586b8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:652 00:07:16.275 #36 NEW cov: 12397 ft: 14581 corp: 14/265b lim: 40 exec/s: 0 rss: 73Mb L: 29/29 MS: 1 PersAutoDict- DE: "\002\000\000\000\000\000\000\000"- 00:07:16.275 [2024-12-12 06:45:23.616736] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:000affff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.275 [2024-12-12 06:45:23.616762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.275 [2024-12-12 06:45:23.616821] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:feffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.275 [2024-12-12 06:45:23.616836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:16.275 #37 NEW cov: 12397 ft: 14607 corp: 15/283b lim: 40 exec/s: 0 rss: 73Mb L: 18/29 MS: 1 ChangeBit- 00:07:16.275 [2024-12-12 06:45:23.676757] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ff020000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.275 [2024-12-12 06:45:23.676783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 
dnr:0 00:07:16.275 #38 NEW cov: 12397 ft: 14624 corp: 16/294b lim: 40 exec/s: 38 rss: 73Mb L: 11/29 MS: 1 PersAutoDict- DE: "\002\000\000\000\000\000\000\000"- 00:07:16.275 [2024-12-12 06:45:23.737396] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.275 [2024-12-12 06:45:23.737422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.275 [2024-12-12 06:45:23.737478] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffff0000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.275 [2024-12-12 06:45:23.737492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:16.275 [2024-12-12 06:45:23.737546] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.275 [2024-12-12 06:45:23.737559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:16.275 [2024-12-12 06:45:23.737612] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.275 [2024-12-12 06:45:23.737625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:16.275 #39 NEW cov: 12397 ft: 14938 corp: 17/329b lim: 40 exec/s: 39 rss: 73Mb L: 35/35 MS: 1 CrossOver- 00:07:16.275 [2024-12-12 06:45:23.777180] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:000affff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.276 [2024-12-12 06:45:23.777205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.276 [2024-12-12 06:45:23.777262] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:feff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.276 [2024-12-12 06:45:23.777275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:16.535 #40 NEW cov: 12397 ft: 14989 corp: 18/351b lim: 40 exec/s: 40 rss: 73Mb L: 22/35 MS: 1 CrossOver- 00:07:16.535 [2024-12-12 06:45:23.837500] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:32ffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.535 [2024-12-12 06:45:23.837527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.535 [2024-12-12 06:45:23.837601] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:03000000 cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.535 [2024-12-12 06:45:23.837615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:16.535 [2024-12-12 06:45:23.837667] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffff28 cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:07:16.535 [2024-12-12 06:45:23.837683] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:16.535 #41 NEW cov: 12397 ft: 15016 corp: 19/376b lim: 40 exec/s: 41 rss: 73Mb L: 25/35 MS: 1 ChangeByte- 00:07:16.535 [2024-12-12 06:45:23.877610] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:53000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.535 [2024-12-12 06:45:23.877635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.535 [2024-12-12 06:45:23.877692] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.535 [2024-12-12 06:45:23.877705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:16.535 [2024-12-12 06:45:23.877759] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.535 [2024-12-12 06:45:23.877772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:16.535 #47 NEW cov: 12397 ft: 15054 corp: 20/405b lim: 40 exec/s: 47 rss: 73Mb L: 29/35 MS: 1 ChangeByte- 00:07:16.535 [2024-12-12 06:45:23.917740] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:32ffffff cdw11:ffffff00 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.535 [2024-12-12 06:45:23.917765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.535 [2024-12-12 06:45:23.917823] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000082 cdw11:82828282 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.535 [2024-12-12 06:45:23.917836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:16.535 [2024-12-12 06:45:23.917890] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00ffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.535 [2024-12-12 06:45:23.917903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:16.535 #48 NEW cov: 12397 ft: 15082 corp: 21/435b lim: 40 exec/s: 48 rss: 73Mb L: 30/35 MS: 1 InsertRepeatedBytes- 00:07:16.535 [2024-12-12 06:45:23.977764] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.535 [2024-12-12 06:45:23.977790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.535 [2024-12-12 06:45:23.977847] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.535 [2024-12-12 06:45:23.977861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:16.535 #49 NEW cov: 12397 ft: 15108 corp: 22/454b lim: 40 
exec/s: 49 rss: 73Mb L: 19/35 MS: 1 CopyPart- 00:07:16.535 [2024-12-12 06:45:24.017855] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:000affff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.535 [2024-12-12 06:45:24.017879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.535 [2024-12-12 06:45:24.017934] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffff25 cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.535 [2024-12-12 06:45:24.017948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:16.535 #50 NEW cov: 12397 ft: 15146 corp: 23/472b lim: 40 exec/s: 50 rss: 73Mb L: 18/35 MS: 1 ChangeByte- 00:07:16.796 [2024-12-12 06:45:24.057862] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:ff020000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.796 [2024-12-12 06:45:24.057889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.796 #51 NEW cov: 12397 ft: 15165 corp: 24/487b lim: 40 exec/s: 51 rss: 73Mb L: 15/35 MS: 1 PersAutoDict- DE: "\000\000\000\000"- 00:07:16.796 [2024-12-12 06:45:24.118161] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.796 [2024-12-12 06:45:24.118203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.796 [2024-12-12 06:45:24.118261] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00fb0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.796 [2024-12-12 06:45:24.118275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:16.796 #52 NEW cov: 12397 ft: 15176 corp: 25/508b lim: 40 exec/s: 52 rss: 73Mb L: 21/35 MS: 1 ChangeBinInt- 00:07:16.796 [2024-12-12 06:45:24.178610] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:32ffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.796 [2024-12-12 06:45:24.178636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.796 [2024-12-12 06:45:24.178693] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:03000064 cdw11:64646464 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.796 [2024-12-12 06:45:24.178707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:16.796 [2024-12-12 06:45:24.178761] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:64646400 cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.796 [2024-12-12 06:45:24.178775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:16.796 [2024-12-12 06:45:24.178828] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffff28 cdw11:ffffffff SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.796 [2024-12-12 06:45:24.178841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:16.796 #53 NEW cov: 12397 ft: 15198 corp: 26/541b lim: 40 exec/s: 53 rss: 73Mb L: 33/35 MS: 1 InsertRepeatedBytes- 00:07:16.796 [2024-12-12 06:45:24.238637] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:32ffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.796 [2024-12-12 06:45:24.238662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.796 [2024-12-12 06:45:24.238718] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:03000000 cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.796 [2024-12-12 06:45:24.238731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:16.796 [2024-12-12 06:45:24.238785] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ceffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.796 [2024-12-12 06:45:24.238797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:16.796 #54 NEW cov: 12397 ft: 15211 corp: 27/567b lim: 40 exec/s: 54 rss: 73Mb L: 26/35 MS: 1 InsertByte- 00:07:16.796 [2024-12-12 06:45:24.278425] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.796 [2024-12-12 06:45:24.278453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.796 #55 NEW cov: 12397 ft: 15229 corp: 28/578b lim: 40 exec/s: 55 rss: 73Mb L: 11/35 MS: 1 ChangeByte- 00:07:17.055 [2024-12-12 06:45:24.318836] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:530000c2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.055 [2024-12-12 06:45:24.318862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.055 [2024-12-12 06:45:24.318918] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.055 [2024-12-12 06:45:24.318932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.055 [2024-12-12 06:45:24.318988] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.056 [2024-12-12 06:45:24.319002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:17.056 #56 NEW cov: 12397 ft: 15247 corp: 29/607b lim: 40 exec/s: 56 rss: 73Mb L: 29/35 MS: 1 ChangeByte- 00:07:17.056 [2024-12-12 06:45:24.379016] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:32ffff01 cdw11:000005ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.056 [2024-12-12 06:45:24.379041] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.056 [2024-12-12 06:45:24.379098] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.056 [2024-12-12 06:45:24.379112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.056 [2024-12-12 06:45:24.379167] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.056 [2024-12-12 06:45:24.379197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:17.056 #57 NEW cov: 12397 ft: 15267 corp: 30/632b lim: 40 exec/s: 57 rss: 74Mb L: 25/35 MS: 1 ChangeBinInt- 00:07:17.056 [2024-12-12 06:45:24.418948] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:000affff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.056 [2024-12-12 06:45:24.418973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.056 [2024-12-12 06:45:24.419028] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:1cffffff cdw11:25ffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.056 [2024-12-12 06:45:24.419041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.056 #58 NEW cov: 12397 ft: 15272 corp: 31/651b lim: 40 exec/s: 58 rss: 74Mb L: 19/35 MS: 1 InsertByte- 00:07:17.056 [2024-12-12 06:45:24.479129] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:0000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.056 [2024-12-12 06:45:24.479158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.056 [2024-12-12 06:45:24.479215] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.056 [2024-12-12 06:45:24.479228] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.056 #59 NEW cov: 12397 ft: 15274 corp: 32/669b lim: 40 exec/s: 59 rss: 74Mb L: 18/35 MS: 1 ChangeBit- 00:07:17.056 [2024-12-12 06:45:24.519272] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:73000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.056 [2024-12-12 06:45:24.519297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.056 [2024-12-12 06:45:24.519354] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00fb0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.056 [2024-12-12 06:45:24.519367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.056 #60 NEW cov: 12397 ft: 15282 corp: 33/690b lim: 40 exec/s: 60 rss: 74Mb L: 21/35 MS: 1 ChangeByte- 00:07:17.316 [2024-12-12 06:45:24.579455] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:000affff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.316 [2024-12-12 06:45:24.579480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.316 [2024-12-12 06:45:24.579554] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffff00 cdw11:000012ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.316 [2024-12-12 06:45:24.579568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.316 #61 NEW cov: 12397 ft: 15294 corp: 34/708b lim: 40 exec/s: 61 rss: 74Mb L: 18/35 MS: 1 ChangeBinInt- 00:07:17.316 [2024-12-12 06:45:24.619379] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:ff020000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.316 [2024-12-12 06:45:24.619404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.316 #62 NEW cov: 12397 ft: 15295 corp: 35/723b lim: 40 exec/s: 62 rss: 74Mb L: 15/35 MS: 1 ShuffleBytes- 00:07:17.316 [2024-12-12 06:45:24.679846] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:32ffff01 cdw11:000005ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.316 [2024-12-12 06:45:24.679871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.316 [2024-12-12 06:45:24.679928] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.316 [2024-12-12 06:45:24.679941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.316 [2024-12-12 06:45:24.679995] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ff030000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.316 [2024-12-12 06:45:24.680008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:17.316 #63 NEW cov: 12397 ft: 15308 corp: 36/752b lim: 40 exec/s: 31 rss: 74Mb L: 29/35 MS: 1 PersAutoDict- DE: "\003\000\000\000"- 00:07:17.316 #63 DONE cov: 12397 ft: 15308 corp: 36/752b lim: 40 exec/s: 31 rss: 74Mb 00:07:17.316 ###### Recommended dictionary. ###### 00:07:17.316 "\000\000\000\000" # Uses: 2 00:07:17.316 "\003\000\000\000" # Uses: 1 00:07:17.316 "\002\000\000\000\000\000\000\000" # Uses: 2 00:07:17.316 ###### End of recommended dictionary. 
###### 00:07:17.316 Done 63 runs in 2 second(s) 00:07:17.316 06:45:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_11.conf /var/tmp/suppress_nvmf_fuzz 00:07:17.316 06:45:24 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:17.316 06:45:24 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:17.316 06:45:24 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 12 1 0x1 00:07:17.316 06:45:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=12 00:07:17.316 06:45:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:17.316 06:45:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:17.316 06:45:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:07:17.316 06:45:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_12.conf 00:07:17.316 06:45:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:17.316 06:45:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:17.316 06:45:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 12 00:07:17.316 06:45:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4412 00:07:17.316 06:45:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:07:17.316 06:45:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' 00:07:17.316 06:45:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4412"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:17.575 06:45:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:17.575 06:45:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:17.576 06:45:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' -c /tmp/fuzz_json_12.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 -Z 12 00:07:17.576 [2024-12-12 06:45:24.871998] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 
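The run.sh xtrace above shows how each fuzzer instance is wired up: the fuzzer type (12) is zero-padded and appended to "44" to form the TCP port, the shared fuzz_json.conf template gets its trsvcid rewritten for that port, two LSAN leak suppressions are emitted, and the resulting transport ID is handed to llvm_nvme_fuzz via -F. A condensed bash sketch of that setup follows; variable names are mine, the shell redirects are hidden by the xtrace and are inferred from the later -c /tmp/fuzz_json_12.conf argument and the suppressions=/var/tmp/suppress_nvmf_fuzz part of LSAN_OPTIONS, and only steps visible in the trace are reproduced.

    # Sketch of the per-run setup traced from nvmf/run.sh for fuzzer type 12
    # (reconstructed from the xtrace above, not the actual script).
    fuzzer_type=12
    port="44$(printf %02d "$fuzzer_type")"   # run.sh@34: printf %02d 12 -> "12", port=4412
    trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port"

    # Per-run JSON config (run.sh@38): swap the template's default trsvcid 4420
    # for this run's port; the redirect target is inferred from "-c /tmp/fuzz_json_12.conf".
    template=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
    sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" "$template" \
        > "/tmp/fuzz_json_${fuzzer_type}.conf"

    # LSAN suppressions (run.sh@41-42); the file path comes from the suppressions=
    # setting in LSAN_OPTIONS at run.sh@32, and appending is an assumption since
    # the redirect itself is not shown in the trace.
    echo "leak:spdk_nvmf_qpair_disconnect" >> /var/tmp/suppress_nvmf_fuzz
    echo "leak:nvmf_ctrlr_create" >> /var/tmp/suppress_nvmf_fuzz
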
00:07:17.576 [2024-12-12 06:45:24.872067] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1158493 ] 00:07:17.576 [2024-12-12 06:45:25.056116] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.576 [2024-12-12 06:45:25.089477] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.834 [2024-12-12 06:45:25.148351] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:17.834 [2024-12-12 06:45:25.164671] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4412 *** 00:07:17.834 INFO: Running with entropic power schedule (0xFF, 100). 00:07:17.834 INFO: Seed: 3677433505 00:07:17.834 INFO: Loaded 1 modules (390948 inline 8-bit counters): 390948 [0x2c8afcc, 0x2cea6f0), 00:07:17.834 INFO: Loaded 1 PC tables (390948 PCs): 390948 [0x2cea6f0,0x32e1930), 00:07:17.834 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:07:17.834 INFO: A corpus is not provided, starting from an empty corpus 00:07:17.834 #2 INITED exec/s: 0 rss: 65Mb 00:07:17.834 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:17.834 This may also happen if the target rejected all inputs we tried so far 00:07:17.834 [2024-12-12 06:45:25.210321] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:01808080 cdw11:80808080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.834 [2024-12-12 06:45:25.210350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.834 [2024-12-12 06:45:25.210406] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:80808080 cdw11:80808080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.834 [2024-12-12 06:45:25.210421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.834 [2024-12-12 06:45:25.210474] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:80808080 cdw11:80808080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.834 [2024-12-12 06:45:25.210487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:18.093 NEW_FUNC[1/717]: 0x44c588 in fuzz_admin_directive_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:241 00:07:18.093 NEW_FUNC[2/717]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:18.093 #12 NEW cov: 12160 ft: 12167 corp: 2/30b lim: 40 exec/s: 0 rss: 72Mb L: 29/29 MS: 5 ChangeBinInt-CrossOver-ChangeByte-ChangeBinInt-InsertRepeatedBytes- 00:07:18.093 [2024-12-12 06:45:25.521228] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:5e5e5e5e cdw11:5e5e5e5e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.093 [2024-12-12 06:45:25.521260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.093 [2024-12-12 06:45:25.521321] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:5e5e5e5e cdw11:5e5e5e5e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.093 [2024-12-12 06:45:25.521335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.093 [2024-12-12 06:45:25.521389] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:5e5e5e5e cdw11:5e5e5e2a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.093 [2024-12-12 06:45:25.521402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:18.093 #15 NEW cov: 12281 ft: 12756 corp: 3/55b lim: 40 exec/s: 0 rss: 72Mb L: 25/29 MS: 3 ShuffleBytes-InsertByte-InsertRepeatedBytes- 00:07:18.093 [2024-12-12 06:45:25.561262] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:01808080 cdw11:80808080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.093 [2024-12-12 06:45:25.561289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.093 [2024-12-12 06:45:25.561350] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:80808080 cdw11:80808080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.093 [2024-12-12 06:45:25.561364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.093 [2024-12-12 06:45:25.561419] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:80808080 cdw11:80808080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.093 [2024-12-12 06:45:25.561432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:18.093 #16 NEW cov: 12287 ft: 12993 corp: 4/85b lim: 40 exec/s: 0 rss: 72Mb L: 30/30 MS: 1 InsertByte- 00:07:18.352 [2024-12-12 06:45:25.621445] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:5e5e5e5e cdw11:5e5e5e5e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.352 [2024-12-12 06:45:25.621472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.352 [2024-12-12 06:45:25.621545] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:5e5e5e5e cdw11:5e5e5e5e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.352 [2024-12-12 06:45:25.621559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.352 [2024-12-12 06:45:25.621620] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:5e5e5e5e cdw11:5e5e5e2a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.352 [2024-12-12 06:45:25.621634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:18.352 #17 NEW cov: 12372 ft: 13269 corp: 5/110b lim: 40 exec/s: 0 rss: 72Mb L: 25/30 MS: 1 ShuffleBytes- 00:07:18.352 [2024-12-12 06:45:25.681401] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:01808080 cdw11:80808080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.352 [2024-12-12 06:45:25.681431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.352 [2024-12-12 06:45:25.681488] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:80808080 cdw11:80ed8080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.352 [2024-12-12 06:45:25.681502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.352 #18 NEW cov: 12372 ft: 13663 corp: 6/127b lim: 40 exec/s: 0 rss: 72Mb L: 17/30 MS: 1 EraseBytes- 00:07:18.352 [2024-12-12 06:45:25.741744] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000019 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.352 [2024-12-12 06:45:25.741771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.352 [2024-12-12 06:45:25.741828] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:5e5e5e5e cdw11:5e5e5e5e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.352 [2024-12-12 06:45:25.741842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.352 [2024-12-12 06:45:25.741897] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:5e5e5e5e cdw11:5e5e5e2a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.352 [2024-12-12 06:45:25.741910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:18.352 #19 NEW cov: 12372 ft: 13747 corp: 7/152b lim: 40 exec/s: 0 rss: 72Mb L: 25/30 MS: 1 ChangeBinInt- 00:07:18.352 [2024-12-12 06:45:25.801771] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000019 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.352 [2024-12-12 06:45:25.801797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.352 [2024-12-12 06:45:25.801873] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:5e5e5e5e cdw11:5e5e5e5e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.352 [2024-12-12 06:45:25.801887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.352 #20 NEW cov: 12372 ft: 13798 corp: 8/173b lim: 40 exec/s: 0 rss: 72Mb L: 21/30 MS: 1 EraseBytes- 00:07:18.352 [2024-12-12 06:45:25.862084] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000019 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.352 [2024-12-12 06:45:25.862111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.352 [2024-12-12 06:45:25.862172] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:5e5e5e5e cdw11:5e285e5e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.352 [2024-12-12 06:45:25.862186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.352 [2024-12-12 06:45:25.862247] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:5e5e5e5e cdw11:5e5e5e2a SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:07:18.352 [2024-12-12 06:45:25.862261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:18.612 #21 NEW cov: 12372 ft: 13879 corp: 9/198b lim: 40 exec/s: 0 rss: 72Mb L: 25/30 MS: 1 ChangeByte- 00:07:18.612 [2024-12-12 06:45:25.902325] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000019 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.612 [2024-12-12 06:45:25.902351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.612 [2024-12-12 06:45:25.902441] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:5e5e5e5e cdw11:5e285e5e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.612 [2024-12-12 06:45:25.902455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.612 [2024-12-12 06:45:25.902511] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.612 [2024-12-12 06:45:25.902524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:18.612 [2024-12-12 06:45:25.902580] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00005e5e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.612 [2024-12-12 06:45:25.902593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:18.612 #22 NEW cov: 12372 ft: 14213 corp: 10/237b lim: 40 exec/s: 0 rss: 72Mb L: 39/39 MS: 1 InsertRepeatedBytes- 00:07:18.612 [2024-12-12 06:45:25.962178] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000019 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.612 [2024-12-12 06:45:25.962203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.612 [2024-12-12 06:45:25.962261] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:5e5e5e5e cdw11:5e5e5e60 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.612 [2024-12-12 06:45:25.962274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.612 #23 NEW cov: 12372 ft: 14260 corp: 11/259b lim: 40 exec/s: 0 rss: 73Mb L: 22/39 MS: 1 InsertByte- 00:07:18.612 [2024-12-12 06:45:26.022360] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:01808080 cdw11:80808080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.612 [2024-12-12 06:45:26.022385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.612 [2024-12-12 06:45:26.022446] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:80808080 cdw11:80ed8080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.612 [2024-12-12 06:45:26.022459] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.612 #24 NEW cov: 12372 ft: 14270 corp: 
12/276b lim: 40 exec/s: 0 rss: 73Mb L: 17/39 MS: 1 ShuffleBytes- 00:07:18.612 [2024-12-12 06:45:26.082533] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:01808080 cdw11:80808080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.612 [2024-12-12 06:45:26.082558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.612 [2024-12-12 06:45:26.082618] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:80808080 cdw11:80ed8000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.612 [2024-12-12 06:45:26.082632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.612 NEW_FUNC[1/1]: 0x1c586b8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:652 00:07:18.612 #25 NEW cov: 12395 ft: 14311 corp: 13/293b lim: 40 exec/s: 0 rss: 73Mb L: 17/39 MS: 1 CrossOver- 00:07:18.612 [2024-12-12 06:45:26.123139] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000019 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.612 [2024-12-12 06:45:26.123168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.612 [2024-12-12 06:45:26.123227] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:5e5e5e5e cdw11:5e285e5e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.612 [2024-12-12 06:45:26.123244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.612 [2024-12-12 06:45:26.123301] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00002600 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.612 [2024-12-12 06:45:26.123314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:18.612 [2024-12-12 06:45:26.123370] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:0000005e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.612 [2024-12-12 06:45:26.123383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:18.612 [2024-12-12 06:45:26.123438] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:5e5e5e5e cdw11:5e5e2a0a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.612 [2024-12-12 06:45:26.123450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:18.872 #26 NEW cov: 12395 ft: 14410 corp: 14/333b lim: 40 exec/s: 0 rss: 73Mb L: 40/40 MS: 1 InsertByte- 00:07:18.872 [2024-12-12 06:45:26.182982] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:01808080 cdw11:80808080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.872 [2024-12-12 06:45:26.183008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.872 [2024-12-12 06:45:26.183068] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:8080807f 
cdw11:7f7f7f7f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.872 [2024-12-12 06:45:26.183082] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.872 [2024-12-12 06:45:26.183137] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:7f788080 cdw11:80808080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.872 [2024-12-12 06:45:26.183155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:18.872 #27 NEW cov: 12395 ft: 14426 corp: 15/362b lim: 40 exec/s: 27 rss: 73Mb L: 29/40 MS: 1 ChangeBinInt- 00:07:18.872 [2024-12-12 06:45:26.223109] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:5e5e5e5e cdw11:5e5e5e5e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.872 [2024-12-12 06:45:26.223135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.872 [2024-12-12 06:45:26.223199] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:5e5e5e5e cdw11:5e5e4e5e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.872 [2024-12-12 06:45:26.223213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.872 [2024-12-12 06:45:26.223273] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:5e5e5e5e cdw11:5e5e5e2a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.872 [2024-12-12 06:45:26.223286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:18.872 #28 NEW cov: 12395 ft: 14453 corp: 16/387b lim: 40 exec/s: 28 rss: 73Mb L: 25/40 MS: 1 ChangeBit- 00:07:18.872 [2024-12-12 06:45:26.263227] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:017f8080 cdw11:80808080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.872 [2024-12-12 06:45:26.263252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.872 [2024-12-12 06:45:26.263311] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:80808080 cdw11:80808080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.872 [2024-12-12 06:45:26.263328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.872 [2024-12-12 06:45:26.263384] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:80808080 cdw11:80808080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.872 [2024-12-12 06:45:26.263397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:18.872 #29 NEW cov: 12395 ft: 14475 corp: 17/417b lim: 40 exec/s: 29 rss: 73Mb L: 30/40 MS: 1 ChangeBinInt- 00:07:18.872 [2024-12-12 06:45:26.303347] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:01817f7f cdw11:7f7f7f7f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.872 [2024-12-12 06:45:26.303372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.872 
[2024-12-12 06:45:26.303449] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:7f80807f cdw11:7f7f7f7f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.872 [2024-12-12 06:45:26.303463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.872 [2024-12-12 06:45:26.303519] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:7f788080 cdw11:80808080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.872 [2024-12-12 06:45:26.303533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:18.872 #30 NEW cov: 12395 ft: 14493 corp: 18/446b lim: 40 exec/s: 30 rss: 73Mb L: 29/40 MS: 1 ChangeBinInt- 00:07:18.872 [2024-12-12 06:45:26.363391] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:01808084 cdw11:80808080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.872 [2024-12-12 06:45:26.363418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.872 [2024-12-12 06:45:26.363472] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:80808080 cdw11:80ed8000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.872 [2024-12-12 06:45:26.363486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.132 #31 NEW cov: 12395 ft: 14549 corp: 19/463b lim: 40 exec/s: 31 rss: 73Mb L: 17/40 MS: 1 ChangeBinInt- 00:07:19.132 [2024-12-12 06:45:26.423678] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:01808080 cdw11:80808080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.132 [2024-12-12 06:45:26.423704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.132 [2024-12-12 06:45:26.423763] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:8080807f cdw11:7f7f7f7f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.132 [2024-12-12 06:45:26.423777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.132 [2024-12-12 06:45:26.423836] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:7f788080 cdw11:80808080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.132 [2024-12-12 06:45:26.423850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:19.132 #32 NEW cov: 12395 ft: 14554 corp: 20/492b lim: 40 exec/s: 32 rss: 73Mb L: 29/40 MS: 1 ShuffleBytes- 00:07:19.132 [2024-12-12 06:45:26.463621] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000019 cdw11:5e5e5e5e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.132 [2024-12-12 06:45:26.463647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.132 [2024-12-12 06:45:26.463725] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:5e285e5e cdw11:5e5e5e5e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.132 [2024-12-12 06:45:26.463740] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.132 #33 NEW cov: 12395 ft: 14578 corp: 21/513b lim: 40 exec/s: 33 rss: 73Mb L: 21/40 MS: 1 EraseBytes- 00:07:19.132 [2024-12-12 06:45:26.503755] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:1a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.132 [2024-12-12 06:45:26.503781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.132 [2024-12-12 06:45:26.503841] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:195e5e5e cdw11:5e5e5e5e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.132 [2024-12-12 06:45:26.503855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.132 #35 NEW cov: 12395 ft: 14597 corp: 22/536b lim: 40 exec/s: 35 rss: 73Mb L: 23/40 MS: 2 ChangeBit-CrossOver- 00:07:19.132 [2024-12-12 06:45:26.543653] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000019 cdw11:5e5e5e5e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.132 [2024-12-12 06:45:26.543678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.132 #36 NEW cov: 12395 ft: 15328 corp: 23/547b lim: 40 exec/s: 36 rss: 73Mb L: 11/40 MS: 1 EraseBytes- 00:07:19.132 [2024-12-12 06:45:26.604190] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:01808080 cdw11:80808080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.132 [2024-12-12 06:45:26.604216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.132 [2024-12-12 06:45:26.604277] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:80808080 cdw11:80808080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.132 [2024-12-12 06:45:26.604291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.132 [2024-12-12 06:45:26.604351] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:803f8080 cdw11:80808080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.132 [2024-12-12 06:45:26.604364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:19.132 #37 NEW cov: 12395 ft: 15367 corp: 24/576b lim: 40 exec/s: 37 rss: 73Mb L: 29/40 MS: 1 ChangeByte- 00:07:19.132 [2024-12-12 06:45:26.644131] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:5e5e5e5e cdw11:5e5e5e5e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.132 [2024-12-12 06:45:26.644163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.132 [2024-12-12 06:45:26.644222] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:5e5e5e5e cdw11:5e5e5e5e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.132 [2024-12-12 06:45:26.644236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 
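The entries in this stretch come from fuzzing the admin DIRECTIVE SEND opcode (0x19) with varying cdw10/cdw11 payloads against the TCP listener at 127.0.0.1:4412. When debugging a single input outside the fuzzer, one of these commands can be replayed by hand with nvme-cli; the sketch below assumes a recent nvme-cli and that the target is still listening, reuses the cdw values printed just before the "#29 NEW" line above, and omits the 4 KiB data buffer (len:0x1000 in the log) that the harness attaches.

    # Connect to the fuzzed subsystem over TCP; trid values come from the
    # run.sh trace above.
    nvme connect -t tcp -a 127.0.0.1 -s 4412 -n nqn.2016-06.io.spdk:cnode1

    # Replay one logged admin command: DIRECTIVE SEND (0x19) with the
    # cdw10/cdw11 pattern from the input behind "#29 NEW". /dev/nvme1 is a
    # placeholder; check `nvme list-subsys` for the controller the connect created.
    nvme admin-passthru /dev/nvme1 --opcode=0x19 --cdw10=0x017f8080 --cdw11=0x80808080

    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
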
00:07:19.392 #38 NEW cov: 12395 ft: 15372 corp: 25/593b lim: 40 exec/s: 38 rss: 73Mb L: 17/40 MS: 1 CrossOver- 00:07:19.392 [2024-12-12 06:45:26.684446] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:01808080 cdw11:80808080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.392 [2024-12-12 06:45:26.684473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.392 [2024-12-12 06:45:26.684531] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:80808080 cdw11:88808080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.392 [2024-12-12 06:45:26.684548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.392 [2024-12-12 06:45:26.684607] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:80808080 cdw11:80808080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.392 [2024-12-12 06:45:26.684621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:19.392 #39 NEW cov: 12395 ft: 15377 corp: 26/623b lim: 40 exec/s: 39 rss: 73Mb L: 30/40 MS: 1 ChangeBit- 00:07:19.392 [2024-12-12 06:45:26.724547] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:01808080 cdw11:80808080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.392 [2024-12-12 06:45:26.724573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.392 [2024-12-12 06:45:26.724633] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:80808080 cdw11:88808080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.392 [2024-12-12 06:45:26.724647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.392 [2024-12-12 06:45:26.724703] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:80808080 cdw11:80808080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.392 [2024-12-12 06:45:26.724717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:19.392 #40 NEW cov: 12395 ft: 15384 corp: 27/654b lim: 40 exec/s: 40 rss: 73Mb L: 31/40 MS: 1 InsertByte- 00:07:19.392 [2024-12-12 06:45:26.784904] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:01808080 cdw11:80808080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.392 [2024-12-12 06:45:26.784931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.392 [2024-12-12 06:45:26.784992] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:8080807f cdw11:7f7f7f7f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.392 [2024-12-12 06:45:26.785006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.392 [2024-12-12 06:45:26.785065] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:cdcdcdcd cdw11:cdcdcd7f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.392 [2024-12-12 
06:45:26.785078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:19.392 [2024-12-12 06:45:26.785137] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:78808080 cdw11:80808080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.392 [2024-12-12 06:45:26.785155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:19.392 #41 NEW cov: 12395 ft: 15393 corp: 28/690b lim: 40 exec/s: 41 rss: 73Mb L: 36/40 MS: 1 InsertRepeatedBytes- 00:07:19.392 [2024-12-12 06:45:26.824830] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:01808080 cdw11:80808080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.392 [2024-12-12 06:45:26.824856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.392 [2024-12-12 06:45:26.824917] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:80808080 cdw11:80808080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.392 [2024-12-12 06:45:26.824931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.393 [2024-12-12 06:45:26.824992] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00808080 cdw11:80808080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.393 [2024-12-12 06:45:26.825005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:19.393 #42 NEW cov: 12395 ft: 15427 corp: 29/719b lim: 40 exec/s: 42 rss: 73Mb L: 29/40 MS: 1 ChangeBit- 00:07:19.393 [2024-12-12 06:45:26.864925] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0180807a cdw11:80808080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.393 [2024-12-12 06:45:26.864951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.393 [2024-12-12 06:45:26.865011] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:80808080 cdw11:80808080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.393 [2024-12-12 06:45:26.865025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.393 [2024-12-12 06:45:26.865084] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:803f8080 cdw11:80808080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.393 [2024-12-12 06:45:26.865097] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:19.393 #43 NEW cov: 12395 ft: 15466 corp: 30/748b lim: 40 exec/s: 43 rss: 73Mb L: 29/40 MS: 1 ChangeByte- 00:07:19.652 [2024-12-12 06:45:26.925018] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:0000005e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.652 [2024-12-12 06:45:26.925045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.652 [2024-12-12 06:45:26.925106] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:5e605e5e cdw11:5e5e2a0a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.652 [2024-12-12 06:45:26.925120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.652 #44 NEW cov: 12395 ft: 15482 corp: 31/764b lim: 40 exec/s: 44 rss: 74Mb L: 16/40 MS: 1 EraseBytes- 00:07:19.652 [2024-12-12 06:45:26.985187] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:0000005e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.652 [2024-12-12 06:45:26.985213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.652 [2024-12-12 06:45:26.985275] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:5e605e5e cdw11:5e5e2a0a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.652 [2024-12-12 06:45:26.985289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.652 #45 NEW cov: 12395 ft: 15526 corp: 32/780b lim: 40 exec/s: 45 rss: 74Mb L: 16/40 MS: 1 ShuffleBytes- 00:07:19.652 [2024-12-12 06:45:27.045557] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:01808080 cdw11:00195e5e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.652 [2024-12-12 06:45:27.045584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.652 [2024-12-12 06:45:27.045644] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:5e5e5e80 cdw11:88808080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.652 [2024-12-12 06:45:27.045658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.652 [2024-12-12 06:45:27.045717] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:80808080 cdw11:80808080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.652 [2024-12-12 06:45:27.045730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:19.652 #46 NEW cov: 12395 ft: 15541 corp: 33/810b lim: 40 exec/s: 46 rss: 74Mb L: 30/40 MS: 1 CrossOver- 00:07:19.652 [2024-12-12 06:45:27.085660] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:01808080 cdw11:80808080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.652 [2024-12-12 06:45:27.085685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.652 [2024-12-12 06:45:27.085745] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:80808080 cdw11:88808080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.652 [2024-12-12 06:45:27.085760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.652 [2024-12-12 06:45:27.085817] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:80808080 cdw11:802a8080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.652 [2024-12-12 06:45:27.085831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) 
qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:19.653 #47 NEW cov: 12395 ft: 15548 corp: 34/841b lim: 40 exec/s: 47 rss: 74Mb L: 31/40 MS: 1 InsertByte- 00:07:19.653 [2024-12-12 06:45:27.125405] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a171717 cdw11:17171717 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.653 [2024-12-12 06:45:27.125430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.653 #51 NEW cov: 12395 ft: 15593 corp: 35/855b lim: 40 exec/s: 51 rss: 74Mb L: 14/40 MS: 4 CopyPart-ShuffleBytes-CopyPart-InsertRepeatedBytes- 00:07:19.653 [2024-12-12 06:45:27.165566] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:01808080 cdw11:80808080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:19.653 [2024-12-12 06:45:27.165591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.912 #52 NEW cov: 12395 ft: 15662 corp: 36/868b lim: 40 exec/s: 26 rss: 74Mb L: 13/40 MS: 1 EraseBytes- 00:07:19.912 #52 DONE cov: 12395 ft: 15662 corp: 36/868b lim: 40 exec/s: 26 rss: 74Mb 00:07:19.912 Done 52 runs in 2 second(s) 00:07:19.912 06:45:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_12.conf /var/tmp/suppress_nvmf_fuzz 00:07:19.912 06:45:27 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:19.912 06:45:27 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:19.912 06:45:27 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 13 1 0x1 00:07:19.912 06:45:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=13 00:07:19.912 06:45:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:19.912 06:45:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:19.912 06:45:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:07:19.912 06:45:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_13.conf 00:07:19.912 06:45:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:19.912 06:45:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:19.912 06:45:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 13 00:07:19.912 06:45:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4413 00:07:19.912 06:45:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:07:19.912 06:45:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' 00:07:19.912 06:45:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4413"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:19.912 06:45:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:19.912 06:45:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:19.912 06:45:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' -c /tmp/fuzz_json_13.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 -Z 13 00:07:19.912 [2024-12-12 06:45:27.358991] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 00:07:19.912 [2024-12-12 06:45:27.359060] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1159023 ] 00:07:20.172 [2024-12-12 06:45:27.549153] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.172 [2024-12-12 06:45:27.582094] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.172 [2024-12-12 06:45:27.640973] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:20.172 [2024-12-12 06:45:27.657299] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4413 *** 00:07:20.172 INFO: Running with entropic power schedule (0xFF, 100). 00:07:20.172 INFO: Seed: 1875460604 00:07:20.172 INFO: Loaded 1 modules (390948 inline 8-bit counters): 390948 [0x2c8afcc, 0x2cea6f0), 00:07:20.172 INFO: Loaded 1 PC tables (390948 PCs): 390948 [0x2cea6f0,0x32e1930), 00:07:20.172 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:07:20.172 INFO: A corpus is not provided, starting from an empty corpus 00:07:20.172 #2 INITED exec/s: 0 rss: 65Mb 00:07:20.172 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:20.172 This may also happen if the target rejected all inputs we tried so far 00:07:20.431 [2024-12-12 06:45:27.702822] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a060606 cdw11:06060606 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.431 [2024-12-12 06:45:27.702851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.431 [2024-12-12 06:45:27.702906] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:06060606 cdw11:06060606 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.431 [2024-12-12 06:45:27.702920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.431 [2024-12-12 06:45:27.702972] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:06060606 cdw11:06060606 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.431 [2024-12-12 06:45:27.702985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:20.737 NEW_FUNC[1/716]: 0x44e158 in fuzz_admin_directive_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:257 00:07:20.737 NEW_FUNC[2/716]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:20.737 #3 NEW cov: 12156 ft: 12147 corp: 2/25b lim: 40 exec/s: 0 rss: 71Mb L: 24/24 MS: 1 InsertRepeatedBytes- 00:07:20.737 [2024-12-12 06:45:28.023629] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a060606 cdw11:06060606 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.737 [2024-12-12 06:45:28.023660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.737 [2024-12-12 06:45:28.023716] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:06060606 cdw11:06060606 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.737 [2024-12-12 06:45:28.023729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.737 [2024-12-12 06:45:28.023787] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:06060606 cdw11:06060606 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.737 [2024-12-12 06:45:28.023816] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:20.737 #4 NEW cov: 12269 ft: 12644 corp: 3/49b lim: 40 exec/s: 0 rss: 72Mb L: 24/24 MS: 1 ShuffleBytes- 00:07:20.737 [2024-12-12 06:45:28.083460] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:24242424 cdw11:24242424 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.737 [2024-12-12 06:45:28.083485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.737 #5 NEW cov: 12275 ft: 13199 corp: 4/62b lim: 40 exec/s: 0 rss: 72Mb L: 13/24 MS: 1 InsertRepeatedBytes- 00:07:20.737 [2024-12-12 06:45:28.123938] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 
cdw10:0a06060a cdw11:06060606 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.737 [2024-12-12 06:45:28.123963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.737 [2024-12-12 06:45:28.124020] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:06060606 cdw11:06060606 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.737 [2024-12-12 06:45:28.124033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.737 [2024-12-12 06:45:28.124085] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:06060606 cdw11:06060606 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.737 [2024-12-12 06:45:28.124098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:20.737 [2024-12-12 06:45:28.124158] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:06060606 cdw11:06060606 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.737 [2024-12-12 06:45:28.124172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:20.737 #6 NEW cov: 12360 ft: 13920 corp: 5/101b lim: 40 exec/s: 0 rss: 72Mb L: 39/39 MS: 1 CrossOver- 00:07:20.737 [2024-12-12 06:45:28.163915] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a060606 cdw11:06060606 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.737 [2024-12-12 06:45:28.163940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.737 [2024-12-12 06:45:28.164013] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:06060606 cdw11:06060606 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.737 [2024-12-12 06:45:28.164026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.737 [2024-12-12 06:45:28.164082] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:06060606 cdw11:2b060606 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.737 [2024-12-12 06:45:28.164094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:20.737 #7 NEW cov: 12360 ft: 14094 corp: 6/125b lim: 40 exec/s: 0 rss: 72Mb L: 24/39 MS: 1 ChangeByte- 00:07:20.737 [2024-12-12 06:45:28.203798] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:10073d00 cdw11:00000a0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.737 [2024-12-12 06:45:28.203823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.034 #11 NEW cov: 12360 ft: 14247 corp: 7/133b lim: 40 exec/s: 0 rss: 72Mb L: 8/39 MS: 4 InsertByte-CMP-CrossOver-InsertByte- DE: "\007\000\000\000"- 00:07:21.034 [2024-12-12 06:45:28.243915] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:24242424 cdw11:24242424 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.034 [2024-12-12 06:45:28.243941] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.034 #12 NEW cov: 12360 ft: 14342 corp: 8/146b lim: 40 exec/s: 0 rss: 72Mb L: 13/39 MS: 1 ShuffleBytes- 00:07:21.034 [2024-12-12 06:45:28.304190] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:24242424 cdw11:24242424 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.034 [2024-12-12 06:45:28.304216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.034 [2024-12-12 06:45:28.304287] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:24000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.034 [2024-12-12 06:45:28.304300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.034 #13 NEW cov: 12360 ft: 14535 corp: 9/167b lim: 40 exec/s: 0 rss: 72Mb L: 21/39 MS: 1 CMP- DE: "\000\000\000\000\000\000\000\001"- 00:07:21.034 [2024-12-12 06:45:28.344397] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:10073dff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.034 [2024-12-12 06:45:28.344423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.034 [2024-12-12 06:45:28.344479] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.034 [2024-12-12 06:45:28.344492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.034 [2024-12-12 06:45:28.344546] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ff000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.034 [2024-12-12 06:45:28.344559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:21.034 #14 NEW cov: 12360 ft: 14586 corp: 10/193b lim: 40 exec/s: 0 rss: 72Mb L: 26/39 MS: 1 InsertRepeatedBytes- 00:07:21.034 [2024-12-12 06:45:28.404577] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a060606 cdw11:06060606 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.034 [2024-12-12 06:45:28.404602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.034 [2024-12-12 06:45:28.404659] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:06060606 cdw11:06060606 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.034 [2024-12-12 06:45:28.404672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.035 [2024-12-12 06:45:28.404729] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:06062406 cdw11:06060606 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.035 [2024-12-12 06:45:28.404743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:21.035 #15 NEW cov: 12360 ft: 14646 corp: 11/217b lim: 40 exec/s: 0 rss: 72Mb L: 
24/39 MS: 1 CrossOver- 00:07:21.035 [2024-12-12 06:45:28.464855] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a060606 cdw11:06060606 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.035 [2024-12-12 06:45:28.464881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.035 [2024-12-12 06:45:28.464936] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:06060606 cdw11:06060606 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.035 [2024-12-12 06:45:28.464953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.035 [2024-12-12 06:45:28.465007] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:06062400 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.035 [2024-12-12 06:45:28.465020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:21.035 [2024-12-12 06:45:28.465076] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000106 cdw11:06060606 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.035 [2024-12-12 06:45:28.465089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:21.035 #16 NEW cov: 12360 ft: 14670 corp: 12/249b lim: 40 exec/s: 0 rss: 72Mb L: 32/39 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\001"- 00:07:21.035 [2024-12-12 06:45:28.524785] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:24242424 cdw11:24242424 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.035 [2024-12-12 06:45:28.524811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.035 [2024-12-12 06:45:28.524869] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:24000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.035 [2024-12-12 06:45:28.524883] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.294 #17 NEW cov: 12360 ft: 14711 corp: 13/270b lim: 40 exec/s: 0 rss: 72Mb L: 21/39 MS: 1 ChangeBit- 00:07:21.294 [2024-12-12 06:45:28.584846] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:24000000 cdw11:0d242424 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.294 [2024-12-12 06:45:28.584872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.294 NEW_FUNC[1/1]: 0x1c586b8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:652 00:07:21.295 #23 NEW cov: 12383 ft: 14751 corp: 14/283b lim: 40 exec/s: 0 rss: 73Mb L: 13/39 MS: 1 ChangeBinInt- 00:07:21.295 [2024-12-12 06:45:28.645247] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a060606 cdw11:06060606 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.295 [2024-12-12 06:45:28.645274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:07:21.295 [2024-12-12 06:45:28.645329] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:06060606 cdw11:06060606 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.295 [2024-12-12 06:45:28.645342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.295 [2024-12-12 06:45:28.645397] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:06062406 cdw11:06060606 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.295 [2024-12-12 06:45:28.645410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:21.295 #24 NEW cov: 12383 ft: 14807 corp: 15/307b lim: 40 exec/s: 0 rss: 73Mb L: 24/39 MS: 1 ShuffleBytes- 00:07:21.295 [2024-12-12 06:45:28.685530] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:24240000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.295 [2024-12-12 06:45:28.685555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.295 [2024-12-12 06:45:28.685630] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.295 [2024-12-12 06:45:28.685647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.295 [2024-12-12 06:45:28.685703] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00002424 cdw11:24242424 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.295 [2024-12-12 06:45:28.685717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:21.295 [2024-12-12 06:45:28.685772] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:24000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.295 [2024-12-12 06:45:28.685786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:21.295 #25 NEW cov: 12383 ft: 14841 corp: 16/344b lim: 40 exec/s: 25 rss: 73Mb L: 37/39 MS: 1 InsertRepeatedBytes- 00:07:21.295 [2024-12-12 06:45:28.745665] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a060606 cdw11:06060606 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.295 [2024-12-12 06:45:28.745691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.295 [2024-12-12 06:45:28.745749] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:06060606 cdw11:07000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.295 [2024-12-12 06:45:28.745763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.295 [2024-12-12 06:45:28.745817] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:06062400 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.295 [2024-12-12 06:45:28.745831] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:21.295 [2024-12-12 06:45:28.745887] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000106 cdw11:06060606 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.295 [2024-12-12 06:45:28.745901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:21.295 #26 NEW cov: 12383 ft: 14928 corp: 17/376b lim: 40 exec/s: 26 rss: 73Mb L: 32/39 MS: 1 PersAutoDict- DE: "\007\000\000\000"- 00:07:21.295 [2024-12-12 06:45:28.805579] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a060606 cdw11:06060606 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.295 [2024-12-12 06:45:28.805605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.295 [2024-12-12 06:45:28.805664] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:24000006 cdw11:0606000d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.295 [2024-12-12 06:45:28.805677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.554 #27 NEW cov: 12383 ft: 14941 corp: 18/393b lim: 40 exec/s: 27 rss: 73Mb L: 17/39 MS: 1 CrossOver- 00:07:21.555 [2024-12-12 06:45:28.865856] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a060606 cdw11:06060606 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.555 [2024-12-12 06:45:28.865881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.555 [2024-12-12 06:45:28.865942] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:06060606 cdw11:06060606 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.555 [2024-12-12 06:45:28.865956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.555 [2024-12-12 06:45:28.866016] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:24060606 cdw11:06060606 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.555 [2024-12-12 06:45:28.866030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:21.555 #28 NEW cov: 12383 ft: 14968 corp: 19/417b lim: 40 exec/s: 28 rss: 73Mb L: 24/39 MS: 1 CrossOver- 00:07:21.555 [2024-12-12 06:45:28.925872] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a060606 cdw11:06060606 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.555 [2024-12-12 06:45:28.925897] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.555 [2024-12-12 06:45:28.925955] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:24000006 cdw11:06060005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.555 [2024-12-12 06:45:28.925968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.555 #29 NEW cov: 12383 ft: 14970 corp: 20/434b lim: 40 exec/s: 29 rss: 73Mb L: 17/39 MS: 1 
ChangeByte- 00:07:21.555 [2024-12-12 06:45:28.986163] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a060606 cdw11:06060606 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.555 [2024-12-12 06:45:28.986188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.555 [2024-12-12 06:45:28.986245] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:06060606 cdw11:06060606 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.555 [2024-12-12 06:45:28.986258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.555 [2024-12-12 06:45:28.986314] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:24060606 cdw11:06060606 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.555 [2024-12-12 06:45:28.986327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:21.555 #30 NEW cov: 12383 ft: 14992 corp: 21/458b lim: 40 exec/s: 30 rss: 73Mb L: 24/39 MS: 1 ShuffleBytes- 00:07:21.555 [2024-12-12 06:45:29.046585] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a060606 cdw11:06060606 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.555 [2024-12-12 06:45:29.046610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.555 [2024-12-12 06:45:29.046666] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:06060606 cdw11:07000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.555 [2024-12-12 06:45:29.046679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.555 [2024-12-12 06:45:29.046735] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:06062400 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.555 [2024-12-12 06:45:29.046748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:21.555 [2024-12-12 06:45:29.046800] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000100 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.555 [2024-12-12 06:45:29.046813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:21.555 [2024-12-12 06:45:29.046870] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:00000106 cdw11:06060606 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.555 [2024-12-12 06:45:29.046883] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:21.555 #31 NEW cov: 12383 ft: 15031 corp: 22/498b lim: 40 exec/s: 31 rss: 73Mb L: 40/40 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\001"- 00:07:21.815 [2024-12-12 06:45:29.086204] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:24242424 cdw11:24242424 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.815 [2024-12-12 06:45:29.086229] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.815 #32 NEW cov: 12383 ft: 15070 corp: 23/510b lim: 40 exec/s: 32 rss: 73Mb L: 12/40 MS: 1 EraseBytes- 00:07:21.815 [2024-12-12 06:45:29.126568] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a060606 cdw11:d5060606 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.815 [2024-12-12 06:45:29.126592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.815 [2024-12-12 06:45:29.126648] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:06060606 cdw11:06060606 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.815 [2024-12-12 06:45:29.126662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.815 [2024-12-12 06:45:29.126719] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:06060606 cdw11:2b060606 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.815 [2024-12-12 06:45:29.126733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:21.815 #33 NEW cov: 12383 ft: 15099 corp: 24/534b lim: 40 exec/s: 33 rss: 73Mb L: 24/40 MS: 1 ChangeByte- 00:07:21.815 [2024-12-12 06:45:29.166805] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a060606 cdw11:06060606 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.815 [2024-12-12 06:45:29.166829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.815 [2024-12-12 06:45:29.166886] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:06060606 cdw11:06060606 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.815 [2024-12-12 06:45:29.166899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.815 [2024-12-12 06:45:29.166956] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:06062400 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.815 [2024-12-12 06:45:29.166969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:21.815 [2024-12-12 06:45:29.167025] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:4b4b4b4b cdw11:4b4b4b00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.815 [2024-12-12 06:45:29.167037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:21.815 #34 NEW cov: 12383 ft: 15163 corp: 25/573b lim: 40 exec/s: 34 rss: 73Mb L: 39/40 MS: 1 InsertRepeatedBytes- 00:07:21.815 [2024-12-12 06:45:29.206904] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a060606 cdw11:06ff0606 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.815 [2024-12-12 06:45:29.206928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.815 [2024-12-12 06:45:29.207003] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:06060606 cdw11:06060606 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.815 [2024-12-12 06:45:29.207017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.815 [2024-12-12 06:45:29.207073] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:06060624 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.815 [2024-12-12 06:45:29.207099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:21.815 [2024-12-12 06:45:29.207156] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000001 cdw11:06060606 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.815 [2024-12-12 06:45:29.207167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:21.815 #35 NEW cov: 12383 ft: 15229 corp: 26/606b lim: 40 exec/s: 35 rss: 73Mb L: 33/40 MS: 1 InsertByte- 00:07:21.815 [2024-12-12 06:45:29.246906] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a060606 cdw11:06060606 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.815 [2024-12-12 06:45:29.246931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.815 [2024-12-12 06:45:29.246986] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:06062606 cdw11:06060606 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.815 [2024-12-12 06:45:29.246999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.815 [2024-12-12 06:45:29.247054] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:06060606 cdw11:06060606 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.815 [2024-12-12 06:45:29.247067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:21.815 #36 NEW cov: 12383 ft: 15234 corp: 27/630b lim: 40 exec/s: 36 rss: 73Mb L: 24/40 MS: 1 ChangeBit- 00:07:21.815 [2024-12-12 06:45:29.287147] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a060606 cdw11:06060606 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.815 [2024-12-12 06:45:29.287175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.815 [2024-12-12 06:45:29.287234] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:06060b06 cdw11:07000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.815 [2024-12-12 06:45:29.287247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.815 [2024-12-12 06:45:29.287303] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:06062400 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.815 [2024-12-12 06:45:29.287316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 
dnr:0 00:07:21.815 [2024-12-12 06:45:29.287373] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000106 cdw11:06060606 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.815 [2024-12-12 06:45:29.287386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:21.815 #37 NEW cov: 12383 ft: 15244 corp: 28/662b lim: 40 exec/s: 37 rss: 73Mb L: 32/40 MS: 1 ChangeBinInt- 00:07:21.815 [2024-12-12 06:45:29.327298] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a06060a cdw11:06060606 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.815 [2024-12-12 06:45:29.327323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.815 [2024-12-12 06:45:29.327382] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:06060606 cdw11:06060606 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.815 [2024-12-12 06:45:29.327396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.815 [2024-12-12 06:45:29.327456] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:06060606 cdw11:06060606 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.815 [2024-12-12 06:45:29.327469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:21.815 [2024-12-12 06:45:29.327528] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:06060606 cdw11:06060606 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.815 [2024-12-12 06:45:29.327541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:22.075 #38 NEW cov: 12383 ft: 15265 corp: 29/694b lim: 40 exec/s: 38 rss: 73Mb L: 32/40 MS: 1 EraseBytes- 00:07:22.075 [2024-12-12 06:45:29.387226] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a000006 cdw11:06240000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.075 [2024-12-12 06:45:29.387251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.075 [2024-12-12 06:45:29.387309] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00010606 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.075 [2024-12-12 06:45:29.387323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.075 #39 NEW cov: 12383 ft: 15267 corp: 30/713b lim: 40 exec/s: 39 rss: 73Mb L: 19/40 MS: 1 EraseBytes- 00:07:22.075 [2024-12-12 06:45:29.427546] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a060606 cdw11:06060606 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.075 [2024-12-12 06:45:29.427571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.075 [2024-12-12 06:45:29.427631] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:06060606 cdw11:06060606 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:07:22.075 [2024-12-12 06:45:29.427644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.075 [2024-12-12 06:45:29.427700] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:062b0624 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.075 [2024-12-12 06:45:29.427713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:22.075 [2024-12-12 06:45:29.427769] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000001 cdw11:06060606 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.075 [2024-12-12 06:45:29.427781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:22.075 #40 NEW cov: 12383 ft: 15279 corp: 31/746b lim: 40 exec/s: 40 rss: 73Mb L: 33/40 MS: 1 InsertByte- 00:07:22.075 [2024-12-12 06:45:29.467545] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a060606 cdw11:06060606 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.075 [2024-12-12 06:45:29.467570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.075 [2024-12-12 06:45:29.467626] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:06060606 cdw11:06060606 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.075 [2024-12-12 06:45:29.467639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.075 [2024-12-12 06:45:29.467696] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:24060606 cdw11:06060606 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.075 [2024-12-12 06:45:29.467709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:22.075 #41 NEW cov: 12383 ft: 15289 corp: 32/770b lim: 40 exec/s: 41 rss: 74Mb L: 24/40 MS: 1 ShuffleBytes- 00:07:22.075 [2024-12-12 06:45:29.527862] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a060606 cdw11:06060606 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.075 [2024-12-12 06:45:29.527886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.075 [2024-12-12 06:45:29.527944] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:06060606 cdw11:07000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.075 [2024-12-12 06:45:29.527957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.075 [2024-12-12 06:45:29.528013] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:06062400 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.075 [2024-12-12 06:45:29.528026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:22.075 [2024-12-12 06:45:29.528081] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) 
qid:0 cid:7 nsid:0 cdw10:64000106 cdw11:06060606 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.075 [2024-12-12 06:45:29.528093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:22.075 #42 NEW cov: 12383 ft: 15315 corp: 33/802b lim: 40 exec/s: 42 rss: 74Mb L: 32/40 MS: 1 ChangeByte- 00:07:22.075 [2024-12-12 06:45:29.567797] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:24242424 cdw11:24242424 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.075 [2024-12-12 06:45:29.567822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.075 [2024-12-12 06:45:29.567880] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:24010000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.075 [2024-12-12 06:45:29.567893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.075 [2024-12-12 06:45:29.567950] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.075 [2024-12-12 06:45:29.567963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:22.075 #43 NEW cov: 12383 ft: 15324 corp: 34/831b lim: 40 exec/s: 43 rss: 74Mb L: 29/40 MS: 1 CMP- DE: "\001\000\000\000\000\000\000\000"- 00:07:22.334 [2024-12-12 06:45:29.607923] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a060606 cdw11:06f0f9f9 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.334 [2024-12-12 06:45:29.607949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.334 [2024-12-12 06:45:29.608007] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:f9f9f9f9 cdw11:f9060606 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.334 [2024-12-12 06:45:29.608021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.334 [2024-12-12 06:45:29.608077] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:06060606 cdw11:2b060606 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.334 [2024-12-12 06:45:29.608090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:22.334 #44 NEW cov: 12383 ft: 15349 corp: 35/855b lim: 40 exec/s: 44 rss: 74Mb L: 24/40 MS: 1 ChangeBinInt- 00:07:22.335 [2024-12-12 06:45:29.647972] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:10073dff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.335 [2024-12-12 06:45:29.648001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.335 [2024-12-12 06:45:29.648058] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.335 [2024-12-12 06:45:29.648071] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.335 [2024-12-12 06:45:29.648128] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ff000020 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.335 [2024-12-12 06:45:29.648141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:22.335 #45 NEW cov: 12383 ft: 15363 corp: 36/881b lim: 40 exec/s: 45 rss: 74Mb L: 26/40 MS: 1 ChangeBit- 00:07:22.335 [2024-12-12 06:45:29.708268] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a06060a cdw11:06060606 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.335 [2024-12-12 06:45:29.708293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.335 [2024-12-12 06:45:29.708350] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:060a0606 cdw11:06060606 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.335 [2024-12-12 06:45:29.708363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.335 [2024-12-12 06:45:29.708419] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:06060606 cdw11:06060606 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.335 [2024-12-12 06:45:29.708431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:22.335 [2024-12-12 06:45:29.708487] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:06060606 cdw11:06060606 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.335 [2024-12-12 06:45:29.708499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:22.335 #46 NEW cov: 12383 ft: 15369 corp: 37/920b lim: 40 exec/s: 23 rss: 74Mb L: 39/40 MS: 1 ChangeByte- 00:07:22.335 #46 DONE cov: 12383 ft: 15369 corp: 37/920b lim: 40 exec/s: 23 rss: 74Mb 00:07:22.335 ###### Recommended dictionary. ###### 00:07:22.335 "\007\000\000\000" # Uses: 1 00:07:22.335 "\000\000\000\000\000\000\000\001" # Uses: 2 00:07:22.335 "\001\000\000\000\000\000\000\000" # Uses: 0 00:07:22.335 ###### End of recommended dictionary. 
###### 00:07:22.335 Done 46 runs in 2 second(s) 00:07:22.335 06:45:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_13.conf /var/tmp/suppress_nvmf_fuzz 00:07:22.335 06:45:29 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:22.335 06:45:29 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:22.335 06:45:29 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 14 1 0x1 00:07:22.335 06:45:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=14 00:07:22.335 06:45:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:22.335 06:45:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:22.335 06:45:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:07:22.335 06:45:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_14.conf 00:07:22.335 06:45:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:22.335 06:45:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:22.335 06:45:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 14 00:07:22.335 06:45:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4414 00:07:22.335 06:45:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:07:22.335 06:45:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' 00:07:22.335 06:45:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4414"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:22.335 06:45:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:22.335 06:45:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:22.335 06:45:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' -c /tmp/fuzz_json_14.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 -Z 14 00:07:22.594 [2024-12-12 06:45:29.877893] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 
00:07:22.594 [2024-12-12 06:45:29.877977] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1159341 ] 00:07:22.594 [2024-12-12 06:45:30.075402] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.853 [2024-12-12 06:45:30.116820] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.853 [2024-12-12 06:45:30.176105] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:22.853 [2024-12-12 06:45:30.192417] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4414 *** 00:07:22.853 INFO: Running with entropic power schedule (0xFF, 100). 00:07:22.853 INFO: Seed: 115502512 00:07:22.853 INFO: Loaded 1 modules (390948 inline 8-bit counters): 390948 [0x2c8afcc, 0x2cea6f0), 00:07:22.853 INFO: Loaded 1 PC tables (390948 PCs): 390948 [0x2cea6f0,0x32e1930), 00:07:22.853 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:07:22.853 INFO: A corpus is not provided, starting from an empty corpus 00:07:22.853 #2 INITED exec/s: 0 rss: 66Mb 00:07:22.853 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:22.853 This may also happen if the target rejected all inputs we tried so far 00:07:22.853 [2024-12-12 06:45:30.268759] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000001a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.853 [2024-12-12 06:45:30.268806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.112 NEW_FUNC[1/717]: 0x44fd28 in fuzz_admin_set_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:392 00:07:23.112 NEW_FUNC[2/717]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:23.112 #27 NEW cov: 12150 ft: 12137 corp: 2/11b lim: 35 exec/s: 0 rss: 72Mb L: 10/10 MS: 5 CrossOver-ChangeBit-ChangeByte-ShuffleBytes-InsertRepeatedBytes- 00:07:23.112 [2024-12-12 06:45:30.609516] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000001a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.112 [2024-12-12 06:45:30.609557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.371 #28 NEW cov: 12263 ft: 12653 corp: 3/21b lim: 35 exec/s: 0 rss: 72Mb L: 10/10 MS: 1 ShuffleBytes- 00:07:23.371 [2024-12-12 06:45:30.679654] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000001a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.371 [2024-12-12 06:45:30.679685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.371 #29 NEW cov: 12276 ft: 12917 corp: 4/32b lim: 35 exec/s: 0 rss: 72Mb L: 11/11 MS: 1 InsertByte- 00:07:23.371 [2024-12-12 06:45:30.749773] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000001a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.371 [2024-12-12 06:45:30.749803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 
sqhd:000f p:0 m:0 dnr:0 00:07:23.371 #30 NEW cov: 12361 ft: 13276 corp: 5/44b lim: 35 exec/s: 0 rss: 72Mb L: 12/12 MS: 1 InsertByte- 00:07:23.371 [2024-12-12 06:45:30.809839] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000001a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.371 [2024-12-12 06:45:30.809872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.371 #31 NEW cov: 12361 ft: 13478 corp: 6/57b lim: 35 exec/s: 0 rss: 72Mb L: 13/13 MS: 1 CrossOver- 00:07:23.371 [2024-12-12 06:45:30.860078] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000001a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.371 [2024-12-12 06:45:30.860107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.371 #32 NEW cov: 12361 ft: 13553 corp: 7/68b lim: 35 exec/s: 0 rss: 72Mb L: 11/13 MS: 1 ChangeBinInt- 00:07:23.631 [2024-12-12 06:45:30.910619] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.631 [2024-12-12 06:45:30.910650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.631 [2024-12-12 06:45:30.910781] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.631 [2024-12-12 06:45:30.910799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.632 #35 NEW cov: 12361 ft: 14287 corp: 8/82b lim: 35 exec/s: 0 rss: 72Mb L: 14/14 MS: 3 InsertByte-ChangeBit-InsertRepeatedBytes- 00:07:23.632 [2024-12-12 06:45:30.970445] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000001a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.632 [2024-12-12 06:45:30.970478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.632 #36 NEW cov: 12361 ft: 14342 corp: 9/93b lim: 35 exec/s: 0 rss: 72Mb L: 11/14 MS: 1 InsertByte- 00:07:23.632 [2024-12-12 06:45:31.021228] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000001a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.632 [2024-12-12 06:45:31.021258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.632 [2024-12-12 06:45:31.021390] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000a0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.632 [2024-12-12 06:45:31.021416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.632 [2024-12-12 06:45:31.021542] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000a0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.632 [2024-12-12 06:45:31.021565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:23.632 #37 NEW cov: 12361 ft: 14568 corp: 10/115b lim: 35 exec/s: 0 rss: 73Mb L: 22/22 MS: 1 CrossOver- 00:07:23.632 [2024-12-12 06:45:31.090798] 
nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000001a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.632 [2024-12-12 06:45:31.090828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.632 #38 NEW cov: 12361 ft: 14618 corp: 11/126b lim: 35 exec/s: 0 rss: 73Mb L: 11/22 MS: 1 CopyPart- 00:07:23.632 [2024-12-12 06:45:31.140906] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000001a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.632 [2024-12-12 06:45:31.140942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.891 NEW_FUNC[1/1]: 0x1c586b8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:652 00:07:23.891 #39 NEW cov: 12384 ft: 14659 corp: 12/137b lim: 35 exec/s: 0 rss: 73Mb L: 11/22 MS: 1 ChangeByte- 00:07:23.891 [2024-12-12 06:45:31.201607] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000001a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.891 [2024-12-12 06:45:31.201636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.892 [2024-12-12 06:45:31.201768] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000c4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.892 [2024-12-12 06:45:31.201794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.892 [2024-12-12 06:45:31.201932] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000c4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.892 [2024-12-12 06:45:31.201955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:23.892 #40 NEW cov: 12384 ft: 14725 corp: 13/162b lim: 35 exec/s: 40 rss: 73Mb L: 25/25 MS: 1 InsertRepeatedBytes- 00:07:23.892 [2024-12-12 06:45:31.261229] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000001a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.892 [2024-12-12 06:45:31.261264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.892 #41 NEW cov: 12384 ft: 14732 corp: 14/169b lim: 35 exec/s: 41 rss: 73Mb L: 7/25 MS: 1 CrossOver- 00:07:23.892 [2024-12-12 06:45:31.331521] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000001a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.892 [2024-12-12 06:45:31.331558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.892 #42 NEW cov: 12384 ft: 14748 corp: 15/180b lim: 35 exec/s: 42 rss: 73Mb L: 11/25 MS: 1 ChangeBinInt- 00:07:23.892 [2024-12-12 06:45:31.372472] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000001a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.892 [2024-12-12 06:45:31.372509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.892 [2024-12-12 06:45:31.372643] nvme_qpair.c: 
215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.892 [2024-12-12 06:45:31.372673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.892 [2024-12-12 06:45:31.372803] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.892 [2024-12-12 06:45:31.372831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:23.892 [2024-12-12 06:45:31.372959] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:8000005d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.892 [2024-12-12 06:45:31.372988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:23.892 #43 NEW cov: 12384 ft: 15072 corp: 16/211b lim: 35 exec/s: 43 rss: 73Mb L: 31/31 MS: 1 InsertRepeatedBytes- 00:07:24.151 [2024-12-12 06:45:31.421750] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000001a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.151 [2024-12-12 06:45:31.421788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.151 #44 NEW cov: 12384 ft: 15085 corp: 17/222b lim: 35 exec/s: 44 rss: 73Mb L: 11/31 MS: 1 CopyPart- 00:07:24.151 NEW_FUNC[1/2]: 0x46a708 in feat_arbitration /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:273 00:07:24.151 NEW_FUNC[2/2]: 0x138a5c8 in nvmf_ctrlr_set_features_arbitration /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:1607 00:07:24.151 #45 NEW cov: 12441 ft: 15205 corp: 18/235b lim: 35 exec/s: 45 rss: 73Mb L: 13/31 MS: 1 CMP- DE: "\001\000\000\000\002X\354\315"- 00:07:24.151 [2024-12-12 06:45:31.552210] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.151 [2024-12-12 06:45:31.552240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.151 #46 NEW cov: 12441 ft: 15228 corp: 19/246b lim: 35 exec/s: 46 rss: 73Mb L: 11/31 MS: 1 CrossOver- 00:07:24.151 [2024-12-12 06:45:31.622435] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000001a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.151 [2024-12-12 06:45:31.622470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.151 #47 NEW cov: 12441 ft: 15268 corp: 20/257b lim: 35 exec/s: 47 rss: 73Mb L: 11/31 MS: 1 ChangeBinInt- 00:07:24.410 [2024-12-12 06:45:31.692585] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000001a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.410 [2024-12-12 06:45:31.692617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.410 #48 NEW cov: 12441 ft: 15307 corp: 21/267b lim: 35 exec/s: 48 rss: 73Mb L: 10/31 MS: 1 ChangeBinInt- 00:07:24.410 [2024-12-12 06:45:31.732654] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET 
FEATURES RESERVED cid:4 cdw10:0000001a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.410 [2024-12-12 06:45:31.732682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.410 #49 NEW cov: 12441 ft: 15315 corp: 22/278b lim: 35 exec/s: 49 rss: 73Mb L: 11/31 MS: 1 ChangeByte- 00:07:24.410 [2024-12-12 06:45:31.802832] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000029 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.410 [2024-12-12 06:45:31.802865] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.410 #50 NEW cov: 12441 ft: 15324 corp: 23/289b lim: 35 exec/s: 50 rss: 74Mb L: 11/31 MS: 1 ChangeByte- 00:07:24.410 [2024-12-12 06:45:31.863330] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000001a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.410 [2024-12-12 06:45:31.863357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.410 [2024-12-12 06:45:31.863488] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000a0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.410 [2024-12-12 06:45:31.863512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.410 #51 NEW cov: 12441 ft: 15381 corp: 24/306b lim: 35 exec/s: 51 rss: 74Mb L: 17/31 MS: 1 CrossOver- 00:07:24.410 [2024-12-12 06:45:31.913183] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000001a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.410 [2024-12-12 06:45:31.913217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.669 #52 NEW cov: 12441 ft: 15466 corp: 25/319b lim: 35 exec/s: 52 rss: 74Mb L: 13/31 MS: 1 ShuffleBytes- 00:07:24.669 [2024-12-12 06:45:31.963282] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.669 [2024-12-12 06:45:31.963313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.669 #58 NEW cov: 12441 ft: 15478 corp: 26/326b lim: 35 exec/s: 58 rss: 74Mb L: 7/31 MS: 1 EraseBytes- 00:07:24.669 [2024-12-12 06:45:32.003370] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000001a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.669 [2024-12-12 06:45:32.003400] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.669 #59 NEW cov: 12441 ft: 15547 corp: 27/336b lim: 35 exec/s: 59 rss: 74Mb L: 10/31 MS: 1 ChangeByte- 00:07:24.669 [2024-12-12 06:45:32.053582] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000001a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.669 [2024-12-12 06:45:32.053617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.669 #60 NEW cov: 12441 ft: 15554 corp: 28/347b lim: 35 exec/s: 60 rss: 74Mb L: 11/31 MS: 1 InsertByte- 00:07:24.669 [2024-12-12 06:45:32.124087] nvme_qpair.c: 
215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.669 [2024-12-12 06:45:32.124115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.669 [2024-12-12 06:45:32.124259] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.670 [2024-12-12 06:45:32.124276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.670 #61 NEW cov: 12441 ft: 15643 corp: 29/361b lim: 35 exec/s: 61 rss: 74Mb L: 14/31 MS: 1 ChangeBit- 00:07:24.670 [2024-12-12 06:45:32.173939] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000001a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.670 [2024-12-12 06:45:32.173968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.929 #62 NEW cov: 12441 ft: 15779 corp: 30/373b lim: 35 exec/s: 62 rss: 74Mb L: 12/31 MS: 1 InsertByte- 00:07:24.929 [2024-12-12 06:45:32.224917] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000001a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.929 [2024-12-12 06:45:32.224946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.929 [2024-12-12 06:45:32.225090] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.929 [2024-12-12 06:45:32.225107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.929 [2024-12-12 06:45:32.225246] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.929 [2024-12-12 06:45:32.225263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:24.929 [2024-12-12 06:45:32.225393] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.929 [2024-12-12 06:45:32.225410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:24.929 #63 NEW cov: 12441 ft: 15813 corp: 31/402b lim: 35 exec/s: 31 rss: 74Mb L: 29/31 MS: 1 InsertRepeatedBytes- 00:07:24.929 #63 DONE cov: 12441 ft: 15813 corp: 31/402b lim: 35 exec/s: 31 rss: 74Mb 00:07:24.929 ###### Recommended dictionary. ###### 00:07:24.929 "\001\000\000\000\002X\354\315" # Uses: 0 00:07:24.929 ###### End of recommended dictionary. 
###### 00:07:24.929 Done 63 runs in 2 second(s) 00:07:24.929 06:45:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_14.conf /var/tmp/suppress_nvmf_fuzz 00:07:24.929 06:45:32 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:24.929 06:45:32 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:24.929 06:45:32 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 15 1 0x1 00:07:24.929 06:45:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=15 00:07:24.929 06:45:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:24.929 06:45:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:24.929 06:45:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:07:24.929 06:45:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_15.conf 00:07:24.929 06:45:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:24.929 06:45:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:24.929 06:45:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 15 00:07:24.929 06:45:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4415 00:07:24.929 06:45:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:07:24.929 06:45:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' 00:07:24.929 06:45:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4415"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:24.929 06:45:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:24.929 06:45:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:24.929 06:45:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' -c /tmp/fuzz_json_15.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 -Z 15 00:07:24.929 [2024-12-12 06:45:32.398463] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 
00:07:24.929 [2024-12-12 06:45:32.398531] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1159849 ] 00:07:25.188 [2024-12-12 06:45:32.583888] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.188 [2024-12-12 06:45:32.617260] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.188 [2024-12-12 06:45:32.676327] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:25.188 [2024-12-12 06:45:32.692625] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4415 *** 00:07:25.188 INFO: Running with entropic power schedule (0xFF, 100). 00:07:25.188 INFO: Seed: 2614500893 00:07:25.448 INFO: Loaded 1 modules (390948 inline 8-bit counters): 390948 [0x2c8afcc, 0x2cea6f0), 00:07:25.448 INFO: Loaded 1 PC tables (390948 PCs): 390948 [0x2cea6f0,0x32e1930), 00:07:25.448 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:07:25.448 INFO: A corpus is not provided, starting from an empty corpus 00:07:25.448 #2 INITED exec/s: 0 rss: 66Mb 00:07:25.448 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:25.448 This may also happen if the target rejected all inputs we tried so far 00:07:25.448 [2024-12-12 06:45:32.763579] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.448 [2024-12-12 06:45:32.763622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.448 [2024-12-12 06:45:32.763772] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.448 [2024-12-12 06:45:32.763791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:25.448 [2024-12-12 06:45:32.763951] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.448 [2024-12-12 06:45:32.763970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:25.706 NEW_FUNC[1/714]: 0x451268 in fuzz_admin_get_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:460 00:07:25.707 NEW_FUNC[2/714]: 0x471278 in feat_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:340 00:07:25.707 #19 NEW cov: 12118 ft: 12118 corp: 2/31b lim: 35 exec/s: 0 rss: 72Mb L: 30/30 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:07:25.707 [2024-12-12 06:45:33.094465] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.707 [2024-12-12 06:45:33.094512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.707 [2024-12-12 06:45:33.094674] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.707 [2024-12-12 06:45:33.094698] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:25.707 [2024-12-12 06:45:33.094853] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007fb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.707 [2024-12-12 06:45:33.094876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:25.707 NEW_FUNC[1/3]: 0x1082158 in posix_sock_read /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/module/sock/posix/posix.c:1527 00:07:25.707 NEW_FUNC[2/3]: 0x19c6aa8 in nvme_qpair_get_state /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/./nvme_internal.h:1575 00:07:25.707 #20 NEW cov: 12265 ft: 12886 corp: 3/61b lim: 35 exec/s: 0 rss: 72Mb L: 30/30 MS: 1 ChangeBinInt- 00:07:25.707 [2024-12-12 06:45:33.174850] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.707 [2024-12-12 06:45:33.174883] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.707 [2024-12-12 06:45:33.175035] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.707 [2024-12-12 06:45:33.175055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:25.707 [2024-12-12 06:45:33.175161] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007fb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.707 [2024-12-12 06:45:33.175174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:25.707 #21 NEW cov: 12271 ft: 13133 corp: 4/91b lim: 35 exec/s: 0 rss: 72Mb L: 30/30 MS: 1 ChangeBinInt- 00:07:25.966 [2024-12-12 06:45:33.244690] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.966 [2024-12-12 06:45:33.244724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.966 [2024-12-12 06:45:33.244874] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.966 [2024-12-12 06:45:33.244894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:25.966 [2024-12-12 06:45:33.245050] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.966 [2024-12-12 06:45:33.245069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:25.966 #22 NEW cov: 12356 ft: 13341 corp: 5/121b lim: 35 exec/s: 0 rss: 72Mb L: 30/30 MS: 1 ChangeBinInt- 00:07:25.966 [2024-12-12 06:45:33.294989] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.966 [2024-12-12 06:45:33.295016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.966 [2024-12-12 
06:45:33.295168] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.966 [2024-12-12 06:45:33.295186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:25.966 [2024-12-12 06:45:33.295328] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.966 [2024-12-12 06:45:33.295345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:25.966 #23 NEW cov: 12356 ft: 13400 corp: 6/151b lim: 35 exec/s: 0 rss: 72Mb L: 30/30 MS: 1 ShuffleBytes- 00:07:25.966 [2024-12-12 06:45:33.345037] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.966 [2024-12-12 06:45:33.345065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.966 [2024-12-12 06:45:33.345222] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.966 [2024-12-12 06:45:33.345242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:25.966 [2024-12-12 06:45:33.345391] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000700 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.966 [2024-12-12 06:45:33.345408] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:25.966 #24 NEW cov: 12356 ft: 13488 corp: 7/182b lim: 35 exec/s: 0 rss: 73Mb L: 31/31 MS: 1 InsertByte- 00:07:25.966 [2024-12-12 06:45:33.415180] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.966 [2024-12-12 06:45:33.415207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.966 [2024-12-12 06:45:33.415356] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.966 [2024-12-12 06:45:33.415374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:25.966 [2024-12-12 06:45:33.415507] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007fb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.967 [2024-12-12 06:45:33.415523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:25.967 #25 NEW cov: 12356 ft: 13537 corp: 8/212b lim: 35 exec/s: 0 rss: 73Mb L: 30/31 MS: 1 ShuffleBytes- 00:07:25.967 [2024-12-12 06:45:33.465312] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.967 [2024-12-12 06:45:33.465341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.967 [2024-12-12 06:45:33.465489] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES 
RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.967 [2024-12-12 06:45:33.465509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:25.967 [2024-12-12 06:45:33.465654] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007fb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.967 [2024-12-12 06:45:33.465678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:26.226 #26 NEW cov: 12356 ft: 13590 corp: 9/242b lim: 35 exec/s: 0 rss: 73Mb L: 30/31 MS: 1 ChangeBinInt- 00:07:26.226 [2024-12-12 06:45:33.535085] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007a3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.226 [2024-12-12 06:45:33.535114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:26.226 [2024-12-12 06:45:33.535252] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.226 [2024-12-12 06:45:33.535269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:26.226 #29 NEW cov: 12356 ft: 14130 corp: 10/261b lim: 35 exec/s: 0 rss: 73Mb L: 19/31 MS: 3 ChangeBinInt-ChangeByte-InsertRepeatedBytes- 00:07:26.226 [2024-12-12 06:45:33.585688] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.226 [2024-12-12 06:45:33.585718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:26.226 [2024-12-12 06:45:33.585861] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.226 [2024-12-12 06:45:33.585881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:26.226 [2024-12-12 06:45:33.586021] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000700 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.226 [2024-12-12 06:45:33.586040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:26.226 #30 NEW cov: 12356 ft: 14201 corp: 11/292b lim: 35 exec/s: 0 rss: 73Mb L: 31/31 MS: 1 InsertByte- 00:07:26.226 [2024-12-12 06:45:33.635815] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000374 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.226 [2024-12-12 06:45:33.635842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:26.226 [2024-12-12 06:45:33.635975] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000374 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.226 [2024-12-12 06:45:33.635995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:26.226 [2024-12-12 06:45:33.636125] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000374 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:07:26.226 [2024-12-12 06:45:33.636143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:26.226 [2024-12-12 06:45:33.636284] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000374 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.226 [2024-12-12 06:45:33.636302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:26.226 NEW_FUNC[1/1]: 0x1c586b8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:652 00:07:26.226 #32 NEW cov: 12379 ft: 14419 corp: 12/323b lim: 35 exec/s: 0 rss: 73Mb L: 31/31 MS: 2 CrossOver-InsertRepeatedBytes- 00:07:26.226 [2024-12-12 06:45:33.686111] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.226 [2024-12-12 06:45:33.686139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:26.226 [2024-12-12 06:45:33.686281] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000026 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.226 [2024-12-12 06:45:33.686301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:26.226 [2024-12-12 06:45:33.686434] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.226 [2024-12-12 06:45:33.686452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:26.226 #33 NEW cov: 12379 ft: 14437 corp: 13/355b lim: 35 exec/s: 0 rss: 73Mb L: 32/32 MS: 1 InsertByte- 00:07:26.485 [2024-12-12 06:45:33.756330] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.485 [2024-12-12 06:45:33.756357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:26.485 [2024-12-12 06:45:33.756502] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.485 [2024-12-12 06:45:33.756521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:26.485 [2024-12-12 06:45:33.756665] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007fb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.485 [2024-12-12 06:45:33.756684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:26.485 #34 NEW cov: 12379 ft: 14475 corp: 14/385b lim: 35 exec/s: 34 rss: 73Mb L: 30/32 MS: 1 ChangeBinInt- 00:07:26.485 [2024-12-12 06:45:33.826576] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.485 [2024-12-12 06:45:33.826605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:26.485 [2024-12-12 06:45:33.826749] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED 
cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.485 [2024-12-12 06:45:33.826768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:26.485 [2024-12-12 06:45:33.826906] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007fb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.485 [2024-12-12 06:45:33.826923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:26.485 #35 NEW cov: 12379 ft: 14485 corp: 15/415b lim: 35 exec/s: 35 rss: 73Mb L: 30/32 MS: 1 ShuffleBytes- 00:07:26.486 [2024-12-12 06:45:33.896491] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000374 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.486 [2024-12-12 06:45:33.896518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:26.486 [2024-12-12 06:45:33.896654] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000374 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.486 [2024-12-12 06:45:33.896673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:26.486 [2024-12-12 06:45:33.896809] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000374 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.486 [2024-12-12 06:45:33.896826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:26.486 [2024-12-12 06:45:33.896964] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000374 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.486 [2024-12-12 06:45:33.896982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:26.486 #36 NEW cov: 12379 ft: 14554 corp: 16/446b lim: 35 exec/s: 36 rss: 73Mb L: 31/32 MS: 1 ShuffleBytes- 00:07:26.486 [2024-12-12 06:45:33.966309] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007a3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.486 [2024-12-12 06:45:33.966336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:26.486 [2024-12-12 06:45:33.966471] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.486 [2024-12-12 06:45:33.966488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:26.486 #37 NEW cov: 12379 ft: 14600 corp: 17/465b lim: 35 exec/s: 37 rss: 73Mb L: 19/32 MS: 1 ShuffleBytes- 00:07:26.745 [2024-12-12 06:45:34.036477] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.745 [2024-12-12 06:45:34.036504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:26.745 [2024-12-12 06:45:34.036642] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.745 [2024-12-12 
06:45:34.036663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:26.745 #38 NEW cov: 12379 ft: 14632 corp: 18/484b lim: 35 exec/s: 38 rss: 73Mb L: 19/32 MS: 1 ShuffleBytes- 00:07:26.745 [2024-12-12 06:45:34.106846] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007fb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.745 [2024-12-12 06:45:34.106873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:26.745 #39 NEW cov: 12379 ft: 14772 corp: 19/500b lim: 35 exec/s: 39 rss: 74Mb L: 16/32 MS: 1 EraseBytes- 00:07:26.745 [2024-12-12 06:45:34.177714] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.745 [2024-12-12 06:45:34.177740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:26.745 [2024-12-12 06:45:34.177876] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.745 [2024-12-12 06:45:34.177896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:26.745 [2024-12-12 06:45:34.178034] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.745 [2024-12-12 06:45:34.178053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:26.745 NEW_FUNC[1/1]: 0x46ba88 in feat_power_management /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:282 00:07:26.745 #40 NEW cov: 12402 ft: 15094 corp: 20/535b lim: 35 exec/s: 40 rss: 74Mb L: 35/35 MS: 1 InsertRepeatedBytes- 00:07:26.745 [2024-12-12 06:45:34.227186] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007fb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.745 [2024-12-12 06:45:34.227216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.004 #41 NEW cov: 12402 ft: 15115 corp: 21/552b lim: 35 exec/s: 41 rss: 74Mb L: 17/35 MS: 1 InsertByte- 00:07:27.004 [2024-12-12 06:45:34.297625] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.004 [2024-12-12 06:45:34.297655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.004 [2024-12-12 06:45:34.297801] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.004 [2024-12-12 06:45:34.297824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:27.004 #42 NEW cov: 12402 ft: 15201 corp: 22/573b lim: 35 exec/s: 42 rss: 74Mb L: 21/35 MS: 1 EraseBytes- 00:07:27.004 [2024-12-12 06:45:34.368373] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.004 [2024-12-12 06:45:34.368402] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.004 [2024-12-12 06:45:34.368543] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000000ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.004 [2024-12-12 06:45:34.368563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:27.004 [2024-12-12 06:45:34.368706] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.004 [2024-12-12 06:45:34.368726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:27.004 [2024-12-12 06:45:34.368870] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:00000700 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.004 [2024-12-12 06:45:34.368889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:27.004 #43 NEW cov: 12402 ft: 15203 corp: 23/608b lim: 35 exec/s: 43 rss: 74Mb L: 35/35 MS: 1 CopyPart- 00:07:27.004 [2024-12-12 06:45:34.438224] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000374 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.004 [2024-12-12 06:45:34.438254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.004 [2024-12-12 06:45:34.438402] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000374 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.004 [2024-12-12 06:45:34.438420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.005 [2024-12-12 06:45:34.438561] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000374 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.005 [2024-12-12 06:45:34.438579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:27.005 [2024-12-12 06:45:34.438723] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000374 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.005 [2024-12-12 06:45:34.438741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:27.005 #44 NEW cov: 12402 ft: 15214 corp: 24/641b lim: 35 exec/s: 44 rss: 74Mb L: 33/35 MS: 1 CopyPart- 00:07:27.005 [2024-12-12 06:45:34.487891] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.005 [2024-12-12 06:45:34.487921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.005 [2024-12-12 06:45:34.488068] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.005 [2024-12-12 06:45:34.488087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.264 #45 NEW cov: 12402 ft: 15233 corp: 25/660b lim: 35 exec/s: 45 rss: 74Mb L: 19/35 MS: 1 
ChangeBinInt- 00:07:27.264 [2024-12-12 06:45:34.558753] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.265 [2024-12-12 06:45:34.558782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.265 [2024-12-12 06:45:34.558951] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.265 [2024-12-12 06:45:34.558970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:27.265 [2024-12-12 06:45:34.559127] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.265 [2024-12-12 06:45:34.559146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:27.265 #46 NEW cov: 12402 ft: 15240 corp: 26/690b lim: 35 exec/s: 46 rss: 74Mb L: 30/35 MS: 1 ChangeByte- 00:07:27.265 [2024-12-12 06:45:34.608782] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.265 [2024-12-12 06:45:34.608811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.265 [2024-12-12 06:45:34.608956] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000700 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.265 [2024-12-12 06:45:34.608976] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:27.265 [2024-12-12 06:45:34.609121] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007fb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.265 [2024-12-12 06:45:34.609139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:27.265 #47 NEW cov: 12402 ft: 15259 corp: 27/720b lim: 35 exec/s: 47 rss: 74Mb L: 30/35 MS: 1 ChangeBinInt- 00:07:27.265 [2024-12-12 06:45:34.659015] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.265 [2024-12-12 06:45:34.659042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.265 [2024-12-12 06:45:34.659202] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.265 [2024-12-12 06:45:34.659222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:27.265 [2024-12-12 06:45:34.659360] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.265 [2024-12-12 06:45:34.659378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:27.265 #48 NEW cov: 12402 ft: 15368 corp: 28/752b lim: 35 exec/s: 48 rss: 74Mb L: 32/35 MS: 1 CrossOver- 00:07:27.265 [2024-12-12 06:45:34.709084] nvme_qpair.c: 
215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000374 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.265 [2024-12-12 06:45:34.709112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.265 [2024-12-12 06:45:34.709254] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000374 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.265 [2024-12-12 06:45:34.709274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.265 [2024-12-12 06:45:34.709408] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000374 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.265 [2024-12-12 06:45:34.709427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:27.265 [2024-12-12 06:45:34.709579] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000374 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.265 [2024-12-12 06:45:34.709601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:27.265 #49 NEW cov: 12402 ft: 15479 corp: 29/784b lim: 35 exec/s: 24 rss: 74Mb L: 32/35 MS: 1 EraseBytes- 00:07:27.265 #49 DONE cov: 12402 ft: 15479 corp: 29/784b lim: 35 exec/s: 24 rss: 74Mb 00:07:27.265 Done 49 runs in 2 second(s) 00:07:27.524 06:45:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_15.conf /var/tmp/suppress_nvmf_fuzz 00:07:27.524 06:45:34 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:27.524 06:45:34 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:27.524 06:45:34 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 16 1 0x1 00:07:27.524 06:45:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=16 00:07:27.524 06:45:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:27.524 06:45:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:27.524 06:45:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:07:27.524 06:45:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_16.conf 00:07:27.524 06:45:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:27.524 06:45:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:27.524 06:45:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 16 00:07:27.524 06:45:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4416 00:07:27.524 06:45:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:07:27.524 06:45:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' 00:07:27.524 06:45:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4416"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:27.524 06:45:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo 
leak:spdk_nvmf_qpair_disconnect 00:07:27.524 06:45:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:27.524 06:45:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' -c /tmp/fuzz_json_16.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 -Z 16 00:07:27.524 [2024-12-12 06:45:34.911465] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 00:07:27.524 [2024-12-12 06:45:34.911543] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1160378 ] 00:07:27.782 [2024-12-12 06:45:35.093362] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.782 [2024-12-12 06:45:35.126784] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.782 [2024-12-12 06:45:35.185698] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:27.782 [2024-12-12 06:45:35.202006] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4416 *** 00:07:27.782 INFO: Running with entropic power schedule (0xFF, 100). 00:07:27.782 INFO: Seed: 830536082 00:07:27.782 INFO: Loaded 1 modules (390948 inline 8-bit counters): 390948 [0x2c8afcc, 0x2cea6f0), 00:07:27.782 INFO: Loaded 1 PC tables (390948 PCs): 390948 [0x2cea6f0,0x32e1930), 00:07:27.782 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:07:27.782 INFO: A corpus is not provided, starting from an empty corpus 00:07:27.782 #2 INITED exec/s: 0 rss: 66Mb 00:07:27.782 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:27.782 This may also happen if the target rejected all inputs we tried so far 00:07:27.782 [2024-12-12 06:45:35.257227] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.782 [2024-12-12 06:45:35.257258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.040 NEW_FUNC[1/717]: 0x452728 in fuzz_nvm_read_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:519 00:07:28.040 NEW_FUNC[2/717]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:28.040 #31 NEW cov: 12242 ft: 12227 corp: 2/30b lim: 105 exec/s: 0 rss: 72Mb L: 29/29 MS: 4 CrossOver-ChangeBit-CopyPart-InsertRepeatedBytes- 00:07:28.300 [2024-12-12 06:45:35.577993] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.300 [2024-12-12 06:45:35.578027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.300 #32 NEW cov: 12355 ft: 12571 corp: 3/59b lim: 105 exec/s: 0 rss: 72Mb L: 29/29 MS: 1 ChangeBinInt- 00:07:28.300 [2024-12-12 06:45:35.638134] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.300 [2024-12-12 06:45:35.638167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.300 #33 NEW cov: 12361 ft: 12755 corp: 4/88b lim: 105 exec/s: 0 rss: 73Mb L: 29/29 MS: 1 CrossOver- 00:07:28.300 [2024-12-12 06:45:35.698376] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.300 [2024-12-12 06:45:35.698403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.300 [2024-12-12 06:45:35.698461] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:256 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.300 [2024-12-12 06:45:35.698477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.300 #34 NEW cov: 12446 ft: 13641 corp: 5/132b lim: 105 exec/s: 0 rss: 73Mb L: 44/44 MS: 1 CrossOver- 00:07:28.300 [2024-12-12 06:45:35.758440] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.300 [2024-12-12 06:45:35.758468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.300 #35 NEW cov: 12446 ft: 13845 corp: 6/156b lim: 105 exec/s: 0 rss: 73Mb L: 24/44 MS: 1 EraseBytes- 00:07:28.300 [2024-12-12 06:45:35.798765] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.300 [2024-12-12 06:45:35.798791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.300 [2024-12-12 06:45:35.798829] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.300 [2024-12-12 06:45:35.798844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.300 [2024-12-12 06:45:35.798900] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.300 [2024-12-12 06:45:35.798917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:28.559 #36 NEW cov: 12446 ft: 14269 corp: 7/234b lim: 105 exec/s: 0 rss: 73Mb L: 78/78 MS: 1 InsertRepeatedBytes- 00:07:28.559 [2024-12-12 06:45:35.859057] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65339 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.559 [2024-12-12 06:45:35.859088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.559 [2024-12-12 06:45:35.859129] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:4195730024608447034 len:14907 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.559 [2024-12-12 06:45:35.859145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.559 [2024-12-12 06:45:35.859204] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:4195730024608447034 len:14907 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.559 [2024-12-12 06:45:35.859218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:28.559 [2024-12-12 06:45:35.859289] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:7425 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.559 [2024-12-12 06:45:35.859305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:28.559 #37 NEW cov: 12446 ft: 14889 corp: 8/321b lim: 105 exec/s: 0 rss: 73Mb L: 87/87 MS: 1 InsertRepeatedBytes- 00:07:28.559 [2024-12-12 06:45:35.898921] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.559 [2024-12-12 06:45:35.898949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.559 [2024-12-12 06:45:35.899016] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:256 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.559 [2024-12-12 06:45:35.899033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.559 #38 NEW cov: 12446 ft: 14946 corp: 9/366b lim: 105 exec/s: 0 rss: 73Mb L: 45/87 MS: 1 InsertByte- 00:07:28.559 [2024-12-12 06:45:35.939067] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 
lba:18410715276690587647 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.559 [2024-12-12 06:45:35.939095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.559 [2024-12-12 06:45:35.939155] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:256 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.559 [2024-12-12 06:45:35.939171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.559 #39 NEW cov: 12446 ft: 14962 corp: 10/410b lim: 105 exec/s: 0 rss: 73Mb L: 44/87 MS: 1 ChangeBit- 00:07:28.559 [2024-12-12 06:45:35.979042] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.559 [2024-12-12 06:45:35.979069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.559 #40 NEW cov: 12446 ft: 15010 corp: 11/433b lim: 105 exec/s: 0 rss: 73Mb L: 23/87 MS: 1 EraseBytes- 00:07:28.559 [2024-12-12 06:45:36.019101] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:4278190080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.559 [2024-12-12 06:45:36.019129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.559 #42 NEW cov: 12446 ft: 15115 corp: 12/460b lim: 105 exec/s: 0 rss: 73Mb L: 27/87 MS: 2 CrossOver-InsertRepeatedBytes- 00:07:28.559 [2024-12-12 06:45:36.059240] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446743077277138943 len:256 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.559 [2024-12-12 06:45:36.059272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.818 #43 NEW cov: 12446 ft: 15159 corp: 13/483b lim: 105 exec/s: 0 rss: 73Mb L: 23/87 MS: 1 ChangeBinInt- 00:07:28.818 [2024-12-12 06:45:36.119656] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18410715276690587647 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.818 [2024-12-12 06:45:36.119685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.818 [2024-12-12 06:45:36.119732] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:256 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.818 [2024-12-12 06:45:36.119749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.818 [2024-12-12 06:45:36.119807] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.818 [2024-12-12 06:45:36.119824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:28.818 NEW_FUNC[1/1]: 0x1c586b8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:652 00:07:28.818 #44 NEW cov: 12469 ft: 15202 corp: 14/565b lim: 105 exec/s: 0 rss: 73Mb L: 82/87 MS: 1 
InsertRepeatedBytes- 00:07:28.818 [2024-12-12 06:45:36.179828] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.818 [2024-12-12 06:45:36.179855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.818 [2024-12-12 06:45:36.179906] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.818 [2024-12-12 06:45:36.179921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.818 [2024-12-12 06:45:36.179977] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.818 [2024-12-12 06:45:36.179993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:28.818 #45 NEW cov: 12469 ft: 15232 corp: 15/638b lim: 105 exec/s: 0 rss: 73Mb L: 73/87 MS: 1 CrossOver- 00:07:28.818 [2024-12-12 06:45:36.219797] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.818 [2024-12-12 06:45:36.219826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.818 [2024-12-12 06:45:36.219895] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:256 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.818 [2024-12-12 06:45:36.219910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.818 #46 NEW cov: 12469 ft: 15259 corp: 16/682b lim: 105 exec/s: 46 rss: 73Mb L: 44/87 MS: 1 CopyPart- 00:07:28.818 [2024-12-12 06:45:36.260047] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.818 [2024-12-12 06:45:36.260074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.818 [2024-12-12 06:45:36.260137] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.818 [2024-12-12 06:45:36.260163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.818 [2024-12-12 06:45:36.260222] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.818 [2024-12-12 06:45:36.260248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:28.818 #52 NEW cov: 12469 ft: 15283 corp: 17/760b lim: 105 exec/s: 52 rss: 73Mb L: 78/87 MS: 1 ChangeBinInt- 00:07:28.818 [2024-12-12 06:45:36.320238] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18410715276690587647 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:07:28.818 [2024-12-12 06:45:36.320265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.818 [2024-12-12 06:45:36.320311] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:256 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.818 [2024-12-12 06:45:36.320328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.818 [2024-12-12 06:45:36.320385] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.818 [2024-12-12 06:45:36.320402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:29.077 #53 NEW cov: 12469 ft: 15306 corp: 18/836b lim: 105 exec/s: 53 rss: 73Mb L: 76/87 MS: 1 EraseBytes- 00:07:29.077 [2024-12-12 06:45:36.380127] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.077 [2024-12-12 06:45:36.380157] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.077 #54 NEW cov: 12469 ft: 15315 corp: 19/860b lim: 105 exec/s: 54 rss: 73Mb L: 24/87 MS: 1 ChangeBinInt- 00:07:29.077 [2024-12-12 06:45:36.440400] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073694675199 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.077 [2024-12-12 06:45:36.440428] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.077 [2024-12-12 06:45:36.440466] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:2563 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.077 [2024-12-12 06:45:36.440482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:29.077 #55 NEW cov: 12469 ft: 15337 corp: 20/904b lim: 105 exec/s: 55 rss: 73Mb L: 44/87 MS: 1 CopyPart- 00:07:29.077 [2024-12-12 06:45:36.480429] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:8463800222054970741 len:30070 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.077 [2024-12-12 06:45:36.480457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.077 #59 NEW cov: 12469 ft: 15463 corp: 21/940b lim: 105 exec/s: 59 rss: 73Mb L: 36/87 MS: 4 CrossOver-ShuffleBytes-ChangeByte-InsertRepeatedBytes- 00:07:29.077 [2024-12-12 06:45:36.520531] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:8463800222054970741 len:30070 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.077 [2024-12-12 06:45:36.520557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.077 #60 NEW cov: 12469 ft: 15500 corp: 22/977b lim: 105 exec/s: 60 rss: 74Mb L: 37/87 MS: 1 InsertByte- 00:07:29.077 [2024-12-12 06:45:36.580686] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 
len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.077 [2024-12-12 06:45:36.580716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.336 #61 NEW cov: 12469 ft: 15545 corp: 23/1013b lim: 105 exec/s: 61 rss: 74Mb L: 36/87 MS: 1 InsertRepeatedBytes- 00:07:29.337 [2024-12-12 06:45:36.621132] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65339 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.337 [2024-12-12 06:45:36.621164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.337 [2024-12-12 06:45:36.621234] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:4195730024608447034 len:14907 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.337 [2024-12-12 06:45:36.621249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:29.337 [2024-12-12 06:45:36.621305] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:4195730024608447034 len:14907 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.337 [2024-12-12 06:45:36.621321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:29.337 [2024-12-12 06:45:36.621379] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:7425 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.337 [2024-12-12 06:45:36.621396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:29.337 #62 NEW cov: 12469 ft: 15593 corp: 24/1100b lim: 105 exec/s: 62 rss: 74Mb L: 87/87 MS: 1 ChangeBit- 00:07:29.337 [2024-12-12 06:45:36.680952] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:8463800222054970741 len:30070 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.337 [2024-12-12 06:45:36.680980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.337 #63 NEW cov: 12469 ft: 15612 corp: 25/1137b lim: 105 exec/s: 63 rss: 74Mb L: 37/87 MS: 1 ShuffleBytes- 00:07:29.337 [2024-12-12 06:45:36.741248] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18374967954648334335 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.337 [2024-12-12 06:45:36.741274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.337 [2024-12-12 06:45:36.741341] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:7425 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.337 [2024-12-12 06:45:36.741359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:29.337 #64 NEW cov: 12469 ft: 15637 corp: 26/1183b lim: 105 exec/s: 64 rss: 74Mb L: 46/87 MS: 1 InsertByte- 00:07:29.337 [2024-12-12 06:45:36.801444] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65339 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.337 [2024-12-12 06:45:36.801472] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.337 [2024-12-12 06:45:36.801525] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.337 [2024-12-12 06:45:36.801542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:29.337 #65 NEW cov: 12469 ft: 15662 corp: 27/1230b lim: 105 exec/s: 65 rss: 74Mb L: 47/87 MS: 1 EraseBytes- 00:07:29.596 [2024-12-12 06:45:36.861483] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.596 [2024-12-12 06:45:36.861511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.596 #66 NEW cov: 12469 ft: 15672 corp: 28/1259b lim: 105 exec/s: 66 rss: 74Mb L: 29/87 MS: 1 ChangeBit- 00:07:29.596 [2024-12-12 06:45:36.901584] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:32768 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.596 [2024-12-12 06:45:36.901610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.596 #67 NEW cov: 12469 ft: 15679 corp: 29/1282b lim: 105 exec/s: 67 rss: 74Mb L: 23/87 MS: 1 ChangeBit- 00:07:29.596 [2024-12-12 06:45:36.941937] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.596 [2024-12-12 06:45:36.941964] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.596 [2024-12-12 06:45:36.942027] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.596 [2024-12-12 06:45:36.942043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:29.596 [2024-12-12 06:45:36.942101] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.596 [2024-12-12 06:45:36.942118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:29.596 #68 NEW cov: 12469 ft: 15687 corp: 30/1355b lim: 105 exec/s: 68 rss: 74Mb L: 73/87 MS: 1 ShuffleBytes- 00:07:29.596 [2024-12-12 06:45:37.002010] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.596 [2024-12-12 06:45:37.002037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.596 [2024-12-12 06:45:37.002105] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709493759 len:21075 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.596 [2024-12-12 06:45:37.002121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 
cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:29.596 #69 NEW cov: 12469 ft: 15712 corp: 31/1413b lim: 105 exec/s: 69 rss: 74Mb L: 58/87 MS: 1 CrossOver- 00:07:29.596 [2024-12-12 06:45:37.062193] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:9445664644854185983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.596 [2024-12-12 06:45:37.062218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.596 [2024-12-12 06:45:37.062274] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:1099511627543 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.596 [2024-12-12 06:45:37.062290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:29.596 #74 NEW cov: 12469 ft: 15724 corp: 32/1455b lim: 105 exec/s: 74 rss: 74Mb L: 42/87 MS: 5 EraseBytes-CMP-ChangeBinInt-CMP-CrossOver- DE: "\203\025\275\362D!\002\000"-"\017\000\000\000"- 00:07:29.856 [2024-12-12 06:45:37.122392] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073694675199 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.856 [2024-12-12 06:45:37.122420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.856 [2024-12-12 06:45:37.122476] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551605 len:65291 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.856 [2024-12-12 06:45:37.122493] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:29.856 #75 NEW cov: 12469 ft: 15732 corp: 33/1500b lim: 105 exec/s: 75 rss: 74Mb L: 45/87 MS: 1 InsertByte- 00:07:29.856 [2024-12-12 06:45:37.182403] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.856 [2024-12-12 06:45:37.182431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.856 #76 NEW cov: 12469 ft: 15763 corp: 34/1530b lim: 105 exec/s: 76 rss: 74Mb L: 30/87 MS: 1 InsertByte- 00:07:29.856 [2024-12-12 06:45:37.222740] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.856 [2024-12-12 06:45:37.222768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.856 [2024-12-12 06:45:37.222813] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18158513697557839871 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.856 [2024-12-12 06:45:37.222830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:29.856 [2024-12-12 06:45:37.222885] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:29.856 [2024-12-12 06:45:37.222918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 
m:0 dnr:1 00:07:29.856 #77 NEW cov: 12469 ft: 15770 corp: 35/1604b lim: 105 exec/s: 38 rss: 74Mb L: 74/87 MS: 1 InsertByte- 00:07:29.856 #77 DONE cov: 12469 ft: 15770 corp: 35/1604b lim: 105 exec/s: 38 rss: 74Mb 00:07:29.856 ###### Recommended dictionary. ###### 00:07:29.856 "\203\025\275\362D!\002\000" # Uses: 0 00:07:29.856 "\017\000\000\000" # Uses: 0 00:07:29.856 ###### End of recommended dictionary. ###### 00:07:29.856 Done 77 runs in 2 second(s) 00:07:29.856 06:45:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_16.conf /var/tmp/suppress_nvmf_fuzz 00:07:29.856 06:45:37 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:29.856 06:45:37 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:29.856 06:45:37 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 17 1 0x1 00:07:29.856 06:45:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=17 00:07:29.856 06:45:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:29.856 06:45:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:29.856 06:45:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:07:29.856 06:45:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_17.conf 00:07:29.856 06:45:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:29.856 06:45:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:29.856 06:45:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 17 00:07:29.856 06:45:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4417 00:07:29.856 06:45:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:07:29.856 06:45:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' 00:07:29.856 06:45:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4417"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:29.856 06:45:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:29.856 06:45:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:29.856 06:45:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' -c /tmp/fuzz_json_17.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 -Z 17 00:07:30.115 [2024-12-12 06:45:37.394792] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 
00:07:30.115 [2024-12-12 06:45:37.394859] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1160670 ] 00:07:30.115 [2024-12-12 06:45:37.586425] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.115 [2024-12-12 06:45:37.621155] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.374 [2024-12-12 06:45:37.680348] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:30.374 [2024-12-12 06:45:37.696658] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4417 *** 00:07:30.374 INFO: Running with entropic power schedule (0xFF, 100). 00:07:30.374 INFO: Seed: 3324533058 00:07:30.374 INFO: Loaded 1 modules (390948 inline 8-bit counters): 390948 [0x2c8afcc, 0x2cea6f0), 00:07:30.374 INFO: Loaded 1 PC tables (390948 PCs): 390948 [0x2cea6f0,0x32e1930), 00:07:30.374 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:07:30.374 INFO: A corpus is not provided, starting from an empty corpus 00:07:30.374 #2 INITED exec/s: 0 rss: 65Mb 00:07:30.374 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:30.374 This may also happen if the target rejected all inputs we tried so far 00:07:30.374 [2024-12-12 06:45:37.762674] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:10055284024492657547 len:35724 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.374 [2024-12-12 06:45:37.762711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:30.633 NEW_FUNC[1/717]: 0x455aa8 in fuzz_nvm_write_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:540 00:07:30.633 NEW_FUNC[2/717]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:30.633 #19 NEW cov: 12262 ft: 12253 corp: 2/34b lim: 120 exec/s: 0 rss: 72Mb L: 33/33 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:07:30.633 [2024-12-12 06:45:38.093439] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:10055284024492657547 len:35724 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.633 [2024-12-12 06:45:38.093483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:30.633 NEW_FUNC[1/1]: 0x14ff7e8 in nvmf_tgroup_poll /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/transport.c:576 00:07:30.633 #20 NEW cov: 12376 ft: 12593 corp: 3/67b lim: 120 exec/s: 0 rss: 72Mb L: 33/33 MS: 1 ChangeBinInt- 00:07:30.892 [2024-12-12 06:45:38.163674] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:10055284024492657547 len:35724 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.892 [2024-12-12 06:45:38.163707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:30.892 #21 NEW cov: 12382 ft: 12868 corp: 4/100b lim: 120 exec/s: 0 rss: 72Mb L: 33/33 MS: 1 ChangeBit- 00:07:30.892 [2024-12-12 06:45:38.213755] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:10055284024492657547 
len:35724 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.892 [2024-12-12 06:45:38.213787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:30.892 #22 NEW cov: 12467 ft: 13164 corp: 5/133b lim: 120 exec/s: 0 rss: 72Mb L: 33/33 MS: 1 CopyPart- 00:07:30.892 [2024-12-12 06:45:38.284453] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:10055284024492657547 len:35724 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.892 [2024-12-12 06:45:38.284488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:30.892 [2024-12-12 06:45:38.284609] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:3038287259199220266 len:10795 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.892 [2024-12-12 06:45:38.284633] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:30.892 [2024-12-12 06:45:38.284754] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:3038287259199220266 len:10795 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.892 [2024-12-12 06:45:38.284780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:30.892 #23 NEW cov: 12467 ft: 14055 corp: 6/207b lim: 120 exec/s: 0 rss: 72Mb L: 74/74 MS: 1 InsertRepeatedBytes- 00:07:30.892 [2024-12-12 06:45:38.354708] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:10055284024492657547 len:35724 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.892 [2024-12-12 06:45:38.354742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:30.892 [2024-12-12 06:45:38.354855] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:3038287259199220266 len:10795 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.892 [2024-12-12 06:45:38.354892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:30.892 [2024-12-12 06:45:38.355016] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:3038287259199220266 len:10795 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.892 [2024-12-12 06:45:38.355037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:30.892 #24 NEW cov: 12467 ft: 14241 corp: 7/281b lim: 120 exec/s: 0 rss: 72Mb L: 74/74 MS: 1 ChangeBinInt- 00:07:31.151 [2024-12-12 06:45:38.424436] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:10055284024492657547 len:35724 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.151 [2024-12-12 06:45:38.424463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.151 #25 NEW cov: 12467 ft: 14321 corp: 8/314b lim: 120 exec/s: 0 rss: 72Mb L: 33/74 MS: 1 ChangeBit- 00:07:31.151 [2024-12-12 06:45:38.474493] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:10055284024492657547 len:35724 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.151 [2024-12-12 06:45:38.474525] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.151 #26 NEW cov: 12467 ft: 14352 corp: 9/347b lim: 120 exec/s: 0 rss: 72Mb L: 33/74 MS: 1 ChangeBinInt- 00:07:31.151 [2024-12-12 06:45:38.524641] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:10055284024492657547 len:35718 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.151 [2024-12-12 06:45:38.524673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.151 #27 NEW cov: 12467 ft: 14376 corp: 10/381b lim: 120 exec/s: 0 rss: 73Mb L: 34/74 MS: 1 InsertByte- 00:07:31.151 [2024-12-12 06:45:38.574784] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:10055284024492657547 len:35724 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.151 [2024-12-12 06:45:38.574810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.151 #28 NEW cov: 12467 ft: 14452 corp: 11/413b lim: 120 exec/s: 0 rss: 73Mb L: 32/74 MS: 1 EraseBytes- 00:07:31.151 [2024-12-12 06:45:38.625110] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:10055284024492657547 len:35718 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.151 [2024-12-12 06:45:38.625140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.151 NEW_FUNC[1/1]: 0x1c586b8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:652 00:07:31.151 #29 NEW cov: 12484 ft: 14476 corp: 12/451b lim: 120 exec/s: 0 rss: 73Mb L: 38/74 MS: 1 CMP- DE: "\000\002\000\000"- 00:07:31.411 [2024-12-12 06:45:38.695220] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:10055284024492657547 len:140 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.411 [2024-12-12 06:45:38.695257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.411 #35 NEW cov: 12484 ft: 14492 corp: 13/484b lim: 120 exec/s: 0 rss: 73Mb L: 33/74 MS: 1 ChangeBinInt- 00:07:31.411 [2024-12-12 06:45:38.765473] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:10025447676961328011 len:35718 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.411 [2024-12-12 06:45:38.765506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.411 #41 NEW cov: 12484 ft: 14519 corp: 14/518b lim: 120 exec/s: 41 rss: 73Mb L: 34/74 MS: 1 ChangeByte- 00:07:31.411 [2024-12-12 06:45:38.815576] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:10055284024492657547 len:35814 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.411 [2024-12-12 06:45:38.815607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.411 #42 NEW cov: 12484 ft: 14542 corp: 15/552b lim: 120 exec/s: 42 rss: 73Mb L: 34/74 MS: 1 InsertByte- 00:07:31.411 [2024-12-12 06:45:38.885882] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:10055284024492657547 len:35718 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.411 [2024-12-12 06:45:38.885915] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.411 #43 NEW cov: 12484 ft: 14623 corp: 16/590b lim: 120 exec/s: 43 rss: 73Mb L: 38/74 MS: 1 PersAutoDict- DE: "\000\002\000\000"- 00:07:31.670 [2024-12-12 06:45:38.936546] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:10055284024492657547 len:35724 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.670 [2024-12-12 06:45:38.936582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.670 [2024-12-12 06:45:38.936707] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:3038287259199220266 len:10795 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.670 [2024-12-12 06:45:38.936732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.670 [2024-12-12 06:45:38.936861] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:3038287259199220266 len:10795 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.670 [2024-12-12 06:45:38.936888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.670 #44 NEW cov: 12484 ft: 14646 corp: 17/664b lim: 120 exec/s: 44 rss: 73Mb L: 74/74 MS: 1 ChangeBit- 00:07:31.670 [2024-12-12 06:45:39.006241] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:10055284024492657547 len:35724 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.670 [2024-12-12 06:45:39.006275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.670 #45 NEW cov: 12484 ft: 14671 corp: 18/701b lim: 120 exec/s: 45 rss: 73Mb L: 37/74 MS: 1 CrossOver- 00:07:31.670 [2024-12-12 06:45:39.076377] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:10055284024492657547 len:35724 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.670 [2024-12-12 06:45:39.076409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.670 #46 NEW cov: 12484 ft: 14753 corp: 19/734b lim: 120 exec/s: 46 rss: 73Mb L: 33/74 MS: 1 ShuffleBytes- 00:07:31.670 [2024-12-12 06:45:39.127028] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:8246779703961815922 len:29299 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.670 [2024-12-12 06:45:39.127062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.670 [2024-12-12 06:45:39.127168] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:8246779703540740722 len:29299 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.670 [2024-12-12 06:45:39.127190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.670 [2024-12-12 06:45:39.127318] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:8246779703540740722 len:29299 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.670 [2024-12-12 06:45:39.127342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 
cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.670 #47 NEW cov: 12484 ft: 14769 corp: 20/825b lim: 120 exec/s: 47 rss: 73Mb L: 91/91 MS: 1 InsertRepeatedBytes- 00:07:31.670 [2024-12-12 06:45:39.177154] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:8753160912107471481 len:31098 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.670 [2024-12-12 06:45:39.177188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.671 [2024-12-12 06:45:39.177286] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:8753160913407277433 len:31098 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.671 [2024-12-12 06:45:39.177308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.671 [2024-12-12 06:45:39.177428] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:8753160913407277433 len:31098 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.671 [2024-12-12 06:45:39.177455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.930 #50 NEW cov: 12484 ft: 14795 corp: 21/915b lim: 120 exec/s: 50 rss: 73Mb L: 90/91 MS: 3 PersAutoDict-ChangeByte-InsertRepeatedBytes- DE: "\000\002\000\000"- 00:07:31.930 [2024-12-12 06:45:39.227430] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:10055284024492657547 len:35724 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.930 [2024-12-12 06:45:39.227463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.930 [2024-12-12 06:45:39.227575] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:3038287083105561130 len:299 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.930 [2024-12-12 06:45:39.227600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.930 [2024-12-12 06:45:39.227725] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:3038287259199220266 len:10795 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.930 [2024-12-12 06:45:39.227753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.930 #51 NEW cov: 12484 ft: 14938 corp: 22/989b lim: 120 exec/s: 51 rss: 73Mb L: 74/91 MS: 1 CMP- DE: "\001\000\000\001"- 00:07:31.930 [2024-12-12 06:45:39.297008] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:10055284024492657547 len:35724 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.930 [2024-12-12 06:45:39.297039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.930 #52 NEW cov: 12484 ft: 14952 corp: 23/1022b lim: 120 exec/s: 52 rss: 73Mb L: 33/91 MS: 1 ShuffleBytes- 00:07:31.930 [2024-12-12 06:45:39.368068] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:10055284024492657547 len:35724 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.930 [2024-12-12 06:45:39.368099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 
00:07:31.930 [2024-12-12 06:45:39.368185] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.930 [2024-12-12 06:45:39.368210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.930 [2024-12-12 06:45:39.368328] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.930 [2024-12-12 06:45:39.368355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.930 [2024-12-12 06:45:39.368478] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.930 [2024-12-12 06:45:39.368500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:31.930 #53 NEW cov: 12484 ft: 15327 corp: 24/1131b lim: 120 exec/s: 53 rss: 73Mb L: 109/109 MS: 1 InsertRepeatedBytes- 00:07:31.930 [2024-12-12 06:45:39.437459] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:10055284024492657547 len:35724 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.930 [2024-12-12 06:45:39.437490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:32.189 #54 NEW cov: 12484 ft: 15349 corp: 25/1164b lim: 120 exec/s: 54 rss: 73Mb L: 33/109 MS: 1 ShuffleBytes- 00:07:32.189 [2024-12-12 06:45:39.487662] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:10055284024492657547 len:35724 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.189 [2024-12-12 06:45:39.487694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:32.189 #55 NEW cov: 12484 ft: 15400 corp: 26/1197b lim: 120 exec/s: 55 rss: 74Mb L: 33/109 MS: 1 ShuffleBytes- 00:07:32.189 [2024-12-12 06:45:39.557823] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:10055284024492657547 len:140 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.189 [2024-12-12 06:45:39.557854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:32.189 #56 NEW cov: 12484 ft: 15432 corp: 27/1230b lim: 120 exec/s: 56 rss: 74Mb L: 33/109 MS: 1 CrossOver- 00:07:32.189 [2024-12-12 06:45:39.628014] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:14178673873461232836 len:50373 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.189 [2024-12-12 06:45:39.628042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:32.189 #60 NEW cov: 12491 ft: 15512 corp: 28/1255b lim: 120 exec/s: 60 rss: 74Mb L: 25/109 MS: 4 CrossOver-ChangeByte-ChangeBit-InsertRepeatedBytes- 00:07:32.189 [2024-12-12 06:45:39.678799] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:10055284024492657547 len:35724 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.189 [2024-12-12 06:45:39.678832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 
00:07:32.189 [2024-12-12 06:45:39.678941] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:3038287259199220266 len:10795 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.189 [2024-12-12 06:45:39.678968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:32.190 [2024-12-12 06:45:39.679098] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:3038287259199220266 len:10795 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.190 [2024-12-12 06:45:39.679124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:32.190 #61 NEW cov: 12491 ft: 15529 corp: 29/1329b lim: 120 exec/s: 61 rss: 74Mb L: 74/109 MS: 1 ShuffleBytes- 00:07:32.449 [2024-12-12 06:45:39.728404] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:10055284024492657547 len:58764 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.449 [2024-12-12 06:45:39.728430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:32.449 #62 NEW cov: 12491 ft: 15536 corp: 30/1362b lim: 120 exec/s: 31 rss: 74Mb L: 33/109 MS: 1 CrossOver- 00:07:32.449 #62 DONE cov: 12491 ft: 15536 corp: 30/1362b lim: 120 exec/s: 31 rss: 74Mb 00:07:32.449 ###### Recommended dictionary. ###### 00:07:32.449 "\000\002\000\000" # Uses: 2 00:07:32.449 "\001\000\000\001" # Uses: 0 00:07:32.449 ###### End of recommended dictionary. ###### 00:07:32.449 Done 62 runs in 2 second(s) 00:07:32.449 06:45:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_17.conf /var/tmp/suppress_nvmf_fuzz 00:07:32.449 06:45:39 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:32.449 06:45:39 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:32.449 06:45:39 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 18 1 0x1 00:07:32.449 06:45:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=18 00:07:32.449 06:45:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:32.449 06:45:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:32.449 06:45:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:07:32.449 06:45:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_18.conf 00:07:32.449 06:45:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:32.449 06:45:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:32.449 06:45:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 18 00:07:32.449 06:45:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4418 00:07:32.449 06:45:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:07:32.449 06:45:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' 00:07:32.449 06:45:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4418"/' 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:32.449 06:45:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:32.449 06:45:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:32.449 06:45:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' -c /tmp/fuzz_json_18.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 -Z 18 00:07:32.449 [2024-12-12 06:45:39.899290] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 00:07:32.449 [2024-12-12 06:45:39.899372] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1161195 ] 00:07:32.707 [2024-12-12 06:45:40.084571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.707 [2024-12-12 06:45:40.126262] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.707 [2024-12-12 06:45:40.185712] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:32.707 [2024-12-12 06:45:40.202019] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4418 *** 00:07:32.707 INFO: Running with entropic power schedule (0xFF, 100). 00:07:32.707 INFO: Seed: 1535585432 00:07:32.967 INFO: Loaded 1 modules (390948 inline 8-bit counters): 390948 [0x2c8afcc, 0x2cea6f0), 00:07:32.967 INFO: Loaded 1 PC tables (390948 PCs): 390948 [0x2cea6f0,0x32e1930), 00:07:32.967 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:07:32.967 INFO: A corpus is not provided, starting from an empty corpus 00:07:32.967 #2 INITED exec/s: 0 rss: 66Mb 00:07:32.967 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:32.967 This may also happen if the target rejected all inputs we tried so far 00:07:32.967 [2024-12-12 06:45:40.257624] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:32.967 [2024-12-12 06:45:40.257652] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:32.967 [2024-12-12 06:45:40.257694] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:32.967 [2024-12-12 06:45:40.257710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:32.967 [2024-12-12 06:45:40.257759] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:32.967 [2024-12-12 06:45:40.257774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:32.967 [2024-12-12 06:45:40.257825] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:32.967 [2024-12-12 06:45:40.257839] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:32.967 [2024-12-12 06:45:40.257892] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:4 nsid:0 00:07:32.967 [2024-12-12 06:45:40.257908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:33.226 NEW_FUNC[1/716]: 0x459398 in fuzz_nvm_write_zeroes_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:562 00:07:33.226 NEW_FUNC[2/716]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:33.226 #10 NEW cov: 12206 ft: 12204 corp: 2/101b lim: 100 exec/s: 0 rss: 73Mb L: 100/100 MS: 3 ShuffleBytes-CMP-InsertRepeatedBytes- DE: ">\000\000\000\000\000\000\000"- 00:07:33.226 [2024-12-12 06:45:40.588579] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:33.226 [2024-12-12 06:45:40.588657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.226 [2024-12-12 06:45:40.588759] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:33.226 [2024-12-12 06:45:40.588797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:33.226 [2024-12-12 06:45:40.588892] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:33.226 [2024-12-12 06:45:40.588941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:33.226 #11 NEW cov: 12319 ft: 13117 corp: 3/173b lim: 100 exec/s: 0 rss: 73Mb L: 72/100 MS: 1 EraseBytes- 00:07:33.226 [2024-12-12 06:45:40.658340] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:33.226 [2024-12-12 06:45:40.658369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.226 [2024-12-12 
06:45:40.658422] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:33.226 [2024-12-12 06:45:40.658439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:33.226 [2024-12-12 06:45:40.658494] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:33.226 [2024-12-12 06:45:40.658510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:33.226 #12 NEW cov: 12325 ft: 13402 corp: 4/238b lim: 100 exec/s: 0 rss: 73Mb L: 65/100 MS: 1 InsertRepeatedBytes- 00:07:33.226 [2024-12-12 06:45:40.698435] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:33.226 [2024-12-12 06:45:40.698465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.226 [2024-12-12 06:45:40.698519] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:33.226 [2024-12-12 06:45:40.698535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:33.226 [2024-12-12 06:45:40.698586] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:33.226 [2024-12-12 06:45:40.698601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:33.226 #20 NEW cov: 12410 ft: 13735 corp: 5/302b lim: 100 exec/s: 0 rss: 73Mb L: 64/100 MS: 3 PersAutoDict-ChangeBinInt-InsertRepeatedBytes- DE: ">\000\000\000\000\000\000\000"- 00:07:33.226 [2024-12-12 06:45:40.738522] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:33.226 [2024-12-12 06:45:40.738549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.226 [2024-12-12 06:45:40.738606] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:33.226 [2024-12-12 06:45:40.738622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:33.226 [2024-12-12 06:45:40.738674] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:33.226 [2024-12-12 06:45:40.738689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:33.486 #21 NEW cov: 12410 ft: 13868 corp: 6/374b lim: 100 exec/s: 0 rss: 73Mb L: 72/100 MS: 1 ChangeBinInt- 00:07:33.486 [2024-12-12 06:45:40.798922] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:33.486 [2024-12-12 06:45:40.798949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.486 [2024-12-12 06:45:40.799017] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:33.486 [2024-12-12 06:45:40.799029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:33.486 
[2024-12-12 06:45:40.799080] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:33.486 [2024-12-12 06:45:40.799095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:33.486 [2024-12-12 06:45:40.799144] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:33.486 [2024-12-12 06:45:40.799163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:33.486 [2024-12-12 06:45:40.799213] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:4 nsid:0 00:07:33.486 [2024-12-12 06:45:40.799227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:33.486 #22 NEW cov: 12410 ft: 13956 corp: 7/474b lim: 100 exec/s: 0 rss: 73Mb L: 100/100 MS: 1 ChangeBit- 00:07:33.486 [2024-12-12 06:45:40.838700] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:33.486 [2024-12-12 06:45:40.838729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.486 [2024-12-12 06:45:40.838787] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:33.486 [2024-12-12 06:45:40.838803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:33.486 #28 NEW cov: 12410 ft: 14268 corp: 8/526b lim: 100 exec/s: 0 rss: 73Mb L: 52/100 MS: 1 CrossOver- 00:07:33.486 [2024-12-12 06:45:40.899180] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:33.486 [2024-12-12 06:45:40.899206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.486 [2024-12-12 06:45:40.899255] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:33.486 [2024-12-12 06:45:40.899269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:33.486 [2024-12-12 06:45:40.899320] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:33.486 [2024-12-12 06:45:40.899334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:33.486 [2024-12-12 06:45:40.899382] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:33.486 [2024-12-12 06:45:40.899395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:33.486 [2024-12-12 06:45:40.899444] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:4 nsid:0 00:07:33.486 [2024-12-12 06:45:40.899458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:33.486 #29 NEW cov: 12410 ft: 14291 corp: 9/626b lim: 100 exec/s: 0 rss: 73Mb L: 100/100 MS: 1 ShuffleBytes- 00:07:33.486 [2024-12-12 06:45:40.939186] nvme_qpair.c: 256:nvme_io_qpair_print_command: 
*NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:33.486 [2024-12-12 06:45:40.939213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.486 [2024-12-12 06:45:40.939261] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:33.486 [2024-12-12 06:45:40.939275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:33.486 [2024-12-12 06:45:40.939322] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:33.486 [2024-12-12 06:45:40.939336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:33.486 [2024-12-12 06:45:40.939385] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:33.486 [2024-12-12 06:45:40.939397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:33.486 #30 NEW cov: 12410 ft: 14332 corp: 10/715b lim: 100 exec/s: 0 rss: 74Mb L: 89/100 MS: 1 CrossOver- 00:07:33.486 [2024-12-12 06:45:40.999248] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:33.486 [2024-12-12 06:45:40.999274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.486 [2024-12-12 06:45:40.999335] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:33.486 [2024-12-12 06:45:40.999351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:33.486 [2024-12-12 06:45:40.999401] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:33.487 [2024-12-12 06:45:40.999418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:33.746 #31 NEW cov: 12410 ft: 14367 corp: 11/780b lim: 100 exec/s: 0 rss: 74Mb L: 65/100 MS: 1 CMP- DE: "\001\000"- 00:07:33.746 [2024-12-12 06:45:41.059324] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:33.746 [2024-12-12 06:45:41.059349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.746 [2024-12-12 06:45:41.059412] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:33.746 [2024-12-12 06:45:41.059427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:33.746 #32 NEW cov: 12410 ft: 14380 corp: 12/838b lim: 100 exec/s: 0 rss: 74Mb L: 58/100 MS: 1 InsertRepeatedBytes- 00:07:33.746 [2024-12-12 06:45:41.099531] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:33.746 [2024-12-12 06:45:41.099557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.746 [2024-12-12 06:45:41.099617] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 
00:07:33.746 [2024-12-12 06:45:41.099632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:33.746 [2024-12-12 06:45:41.099683] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:33.746 [2024-12-12 06:45:41.099697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:33.746 NEW_FUNC[1/1]: 0x1c586b8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:652 00:07:33.746 #33 NEW cov: 12433 ft: 14410 corp: 13/902b lim: 100 exec/s: 0 rss: 74Mb L: 64/100 MS: 1 ChangeByte- 00:07:33.746 [2024-12-12 06:45:41.159687] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:33.746 [2024-12-12 06:45:41.159712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.746 [2024-12-12 06:45:41.159777] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:33.746 [2024-12-12 06:45:41.159791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:33.746 [2024-12-12 06:45:41.159842] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:33.746 [2024-12-12 06:45:41.159856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:33.746 #34 NEW cov: 12433 ft: 14471 corp: 14/966b lim: 100 exec/s: 0 rss: 74Mb L: 64/100 MS: 1 ShuffleBytes- 00:07:33.746 [2024-12-12 06:45:41.219998] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:33.746 [2024-12-12 06:45:41.220024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.746 [2024-12-12 06:45:41.220076] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:33.746 [2024-12-12 06:45:41.220091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:33.746 [2024-12-12 06:45:41.220143] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:33.746 [2024-12-12 06:45:41.220162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:33.746 [2024-12-12 06:45:41.220212] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:33.746 [2024-12-12 06:45:41.220225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:33.746 #35 NEW cov: 12433 ft: 14478 corp: 15/1055b lim: 100 exec/s: 35 rss: 74Mb L: 89/100 MS: 1 ShuffleBytes- 00:07:34.005 [2024-12-12 06:45:41.279941] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:34.005 [2024-12-12 06:45:41.279969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.006 [2024-12-12 06:45:41.280016] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:34.006 [2024-12-12 06:45:41.280031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.006 #41 NEW cov: 12433 ft: 14504 corp: 16/1107b lim: 100 exec/s: 41 rss: 74Mb L: 52/100 MS: 1 ChangeByte- 00:07:34.006 [2024-12-12 06:45:41.320246] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:34.006 [2024-12-12 06:45:41.320272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.006 [2024-12-12 06:45:41.320325] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:34.006 [2024-12-12 06:45:41.320338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.006 [2024-12-12 06:45:41.320387] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:34.006 [2024-12-12 06:45:41.320401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:34.006 [2024-12-12 06:45:41.320453] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:34.006 [2024-12-12 06:45:41.320467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:34.006 #42 NEW cov: 12433 ft: 14521 corp: 17/1189b lim: 100 exec/s: 42 rss: 74Mb L: 82/100 MS: 1 CopyPart- 00:07:34.006 [2024-12-12 06:45:41.360119] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:34.006 [2024-12-12 06:45:41.360152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.006 [2024-12-12 06:45:41.360190] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:34.006 [2024-12-12 06:45:41.360204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.006 #43 NEW cov: 12433 ft: 14571 corp: 18/1241b lim: 100 exec/s: 43 rss: 74Mb L: 52/100 MS: 1 ChangeByte- 00:07:34.006 [2024-12-12 06:45:41.400587] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:34.006 [2024-12-12 06:45:41.400613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.006 [2024-12-12 06:45:41.400677] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:34.006 [2024-12-12 06:45:41.400692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.006 [2024-12-12 06:45:41.400743] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:34.006 [2024-12-12 06:45:41.400756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:34.006 [2024-12-12 06:45:41.400805] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 
cid:3 nsid:0 00:07:34.006 [2024-12-12 06:45:41.400818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:34.006 [2024-12-12 06:45:41.400869] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:4 nsid:0 00:07:34.006 [2024-12-12 06:45:41.400886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:34.006 #44 NEW cov: 12433 ft: 14596 corp: 19/1341b lim: 100 exec/s: 44 rss: 74Mb L: 100/100 MS: 1 ChangeByte- 00:07:34.006 [2024-12-12 06:45:41.440367] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:34.006 [2024-12-12 06:45:41.440393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.006 [2024-12-12 06:45:41.440443] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:34.006 [2024-12-12 06:45:41.440460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.006 #45 NEW cov: 12433 ft: 14623 corp: 20/1395b lim: 100 exec/s: 45 rss: 74Mb L: 54/100 MS: 1 EraseBytes- 00:07:34.006 [2024-12-12 06:45:41.480808] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:34.006 [2024-12-12 06:45:41.480833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.006 [2024-12-12 06:45:41.480887] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:34.006 [2024-12-12 06:45:41.480901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.006 [2024-12-12 06:45:41.480952] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:34.006 [2024-12-12 06:45:41.480966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:34.006 [2024-12-12 06:45:41.481013] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:34.006 [2024-12-12 06:45:41.481027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:34.006 [2024-12-12 06:45:41.481076] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:4 nsid:0 00:07:34.006 [2024-12-12 06:45:41.481090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:34.006 #46 NEW cov: 12433 ft: 14634 corp: 21/1495b lim: 100 exec/s: 46 rss: 74Mb L: 100/100 MS: 1 ChangeByte- 00:07:34.006 [2024-12-12 06:45:41.520797] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:34.006 [2024-12-12 06:45:41.520822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.006 [2024-12-12 06:45:41.520885] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:34.006 [2024-12-12 06:45:41.520900] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.006 [2024-12-12 06:45:41.520952] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:34.006 [2024-12-12 06:45:41.520967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:34.006 [2024-12-12 06:45:41.521017] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:34.006 [2024-12-12 06:45:41.521031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:34.266 #47 NEW cov: 12433 ft: 14644 corp: 22/1584b lim: 100 exec/s: 47 rss: 74Mb L: 89/100 MS: 1 ChangeByte- 00:07:34.266 [2024-12-12 06:45:41.560945] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:34.266 [2024-12-12 06:45:41.560970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.266 [2024-12-12 06:45:41.561032] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:34.266 [2024-12-12 06:45:41.561049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.266 [2024-12-12 06:45:41.561101] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:34.266 [2024-12-12 06:45:41.561116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:34.266 [2024-12-12 06:45:41.561170] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:34.266 [2024-12-12 06:45:41.561184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:34.266 #48 NEW cov: 12433 ft: 14657 corp: 23/1673b lim: 100 exec/s: 48 rss: 74Mb L: 89/100 MS: 1 CrossOver- 00:07:34.266 [2024-12-12 06:45:41.621231] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:34.266 [2024-12-12 06:45:41.621257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.266 [2024-12-12 06:45:41.621325] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:34.266 [2024-12-12 06:45:41.621340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.266 [2024-12-12 06:45:41.621391] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:34.266 [2024-12-12 06:45:41.621404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:34.266 [2024-12-12 06:45:41.621453] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:34.266 [2024-12-12 06:45:41.621466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:34.266 [2024-12-12 06:45:41.621515] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:4 nsid:0 00:07:34.266 [2024-12-12 06:45:41.621529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:34.266 #49 NEW cov: 12433 ft: 14682 corp: 24/1773b lim: 100 exec/s: 49 rss: 74Mb L: 100/100 MS: 1 CrossOver- 00:07:34.266 [2024-12-12 06:45:41.681155] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:34.266 [2024-12-12 06:45:41.681181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.266 [2024-12-12 06:45:41.681247] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:34.266 [2024-12-12 06:45:41.681262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.266 [2024-12-12 06:45:41.681315] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:34.266 [2024-12-12 06:45:41.681329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:34.266 #50 NEW cov: 12433 ft: 14701 corp: 25/1843b lim: 100 exec/s: 50 rss: 74Mb L: 70/100 MS: 1 InsertRepeatedBytes- 00:07:34.266 [2024-12-12 06:45:41.741446] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:34.266 [2024-12-12 06:45:41.741472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.266 [2024-12-12 06:45:41.741542] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:34.266 [2024-12-12 06:45:41.741557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.266 [2024-12-12 06:45:41.741607] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:34.266 [2024-12-12 06:45:41.741625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:34.266 [2024-12-12 06:45:41.741675] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:34.266 [2024-12-12 06:45:41.741691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:34.266 #51 NEW cov: 12433 ft: 14711 corp: 26/1932b lim: 100 exec/s: 51 rss: 75Mb L: 89/100 MS: 1 ShuffleBytes- 00:07:34.526 [2024-12-12 06:45:41.801391] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:34.526 [2024-12-12 06:45:41.801417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.526 [2024-12-12 06:45:41.801469] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:34.526 [2024-12-12 06:45:41.801483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.526 #52 NEW cov: 12433 ft: 14733 corp: 27/1990b lim: 100 exec/s: 52 rss: 75Mb L: 58/100 MS: 
1 ChangeByte- 00:07:34.526 [2024-12-12 06:45:41.861692] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:34.526 [2024-12-12 06:45:41.861720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.526 [2024-12-12 06:45:41.861778] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:34.526 [2024-12-12 06:45:41.861794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.526 [2024-12-12 06:45:41.861845] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:34.526 [2024-12-12 06:45:41.861860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:34.526 #58 NEW cov: 12433 ft: 14736 corp: 28/2054b lim: 100 exec/s: 58 rss: 75Mb L: 64/100 MS: 1 ShuffleBytes- 00:07:34.526 [2024-12-12 06:45:41.901904] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:34.526 [2024-12-12 06:45:41.901930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.526 [2024-12-12 06:45:41.901993] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:34.526 [2024-12-12 06:45:41.902009] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.526 [2024-12-12 06:45:41.902058] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:34.526 [2024-12-12 06:45:41.902073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:34.526 [2024-12-12 06:45:41.902125] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:34.526 [2024-12-12 06:45:41.902139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:34.526 #59 NEW cov: 12433 ft: 14744 corp: 29/2137b lim: 100 exec/s: 59 rss: 75Mb L: 83/100 MS: 1 InsertByte- 00:07:34.526 [2024-12-12 06:45:41.962125] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:34.526 [2024-12-12 06:45:41.962156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.526 [2024-12-12 06:45:41.962204] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:34.526 [2024-12-12 06:45:41.962218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.526 [2024-12-12 06:45:41.962272] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:34.526 [2024-12-12 06:45:41.962286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:34.526 [2024-12-12 06:45:41.962337] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:34.526 [2024-12-12 
06:45:41.962351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:34.526 #60 NEW cov: 12433 ft: 14753 corp: 30/2221b lim: 100 exec/s: 60 rss: 75Mb L: 84/100 MS: 1 CMP- DE: "\377\005"- 00:07:34.526 [2024-12-12 06:45:42.002051] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:34.526 [2024-12-12 06:45:42.002079] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.526 [2024-12-12 06:45:42.002122] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:34.526 [2024-12-12 06:45:42.002137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.526 [2024-12-12 06:45:42.002192] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:34.526 [2024-12-12 06:45:42.002205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:34.526 #61 NEW cov: 12433 ft: 14763 corp: 31/2299b lim: 100 exec/s: 61 rss: 75Mb L: 78/100 MS: 1 InsertRepeatedBytes- 00:07:34.786 [2024-12-12 06:45:42.062360] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:34.786 [2024-12-12 06:45:42.062386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.786 [2024-12-12 06:45:42.062429] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:34.786 [2024-12-12 06:45:42.062444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.786 [2024-12-12 06:45:42.062494] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:34.786 [2024-12-12 06:45:42.062509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:34.786 [2024-12-12 06:45:42.062558] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:34.786 [2024-12-12 06:45:42.062573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:34.786 #62 NEW cov: 12433 ft: 14764 corp: 32/2388b lim: 100 exec/s: 62 rss: 75Mb L: 89/100 MS: 1 CopyPart- 00:07:34.786 [2024-12-12 06:45:42.102460] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:34.786 [2024-12-12 06:45:42.102486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.786 [2024-12-12 06:45:42.102537] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:34.786 [2024-12-12 06:45:42.102552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.786 [2024-12-12 06:45:42.102601] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:34.786 [2024-12-12 06:45:42.102615] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:34.786 [2024-12-12 06:45:42.102666] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:34.786 [2024-12-12 06:45:42.102681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:34.786 #63 NEW cov: 12433 ft: 14783 corp: 33/2477b lim: 100 exec/s: 63 rss: 75Mb L: 89/100 MS: 1 ChangeByte- 00:07:34.786 [2024-12-12 06:45:42.162624] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:34.786 [2024-12-12 06:45:42.162649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.786 [2024-12-12 06:45:42.162700] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:34.786 [2024-12-12 06:45:42.162712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.786 [2024-12-12 06:45:42.162761] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:34.786 [2024-12-12 06:45:42.162792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:34.786 [2024-12-12 06:45:42.162843] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:34.786 [2024-12-12 06:45:42.162858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:34.786 #64 NEW cov: 12433 ft: 14798 corp: 34/2567b lim: 100 exec/s: 64 rss: 75Mb L: 90/100 MS: 1 InsertByte- 00:07:34.786 [2024-12-12 06:45:42.202714] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:34.786 [2024-12-12 06:45:42.202740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.786 [2024-12-12 06:45:42.202803] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:34.786 [2024-12-12 06:45:42.202818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.786 [2024-12-12 06:45:42.202870] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:34.786 [2024-12-12 06:45:42.202884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:34.786 [2024-12-12 06:45:42.202936] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:34.786 [2024-12-12 06:45:42.202952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:34.786 #65 NEW cov: 12433 ft: 14805 corp: 35/2656b lim: 100 exec/s: 65 rss: 75Mb L: 89/100 MS: 1 CrossOver- 00:07:34.786 [2024-12-12 06:45:42.242829] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:34.786 [2024-12-12 06:45:42.242856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT 
(00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.786 [2024-12-12 06:45:42.242907] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:34.786 [2024-12-12 06:45:42.242920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.786 [2024-12-12 06:45:42.242970] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:34.786 [2024-12-12 06:45:42.242985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:34.786 [2024-12-12 06:45:42.243034] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:34.786 [2024-12-12 06:45:42.243047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:34.786 #66 NEW cov: 12433 ft: 14823 corp: 36/2745b lim: 100 exec/s: 33 rss: 75Mb L: 89/100 MS: 1 InsertRepeatedBytes- 00:07:34.786 #66 DONE cov: 12433 ft: 14823 corp: 36/2745b lim: 100 exec/s: 33 rss: 75Mb 00:07:34.786 ###### Recommended dictionary. ###### 00:07:34.786 ">\000\000\000\000\000\000\000" # Uses: 2 00:07:34.786 "\001\000" # Uses: 1 00:07:34.786 "\377\005" # Uses: 0 00:07:34.786 ###### End of recommended dictionary. ###### 00:07:34.786 Done 66 runs in 2 second(s) 00:07:35.046 06:45:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_18.conf /var/tmp/suppress_nvmf_fuzz 00:07:35.046 06:45:42 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:35.046 06:45:42 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:35.046 06:45:42 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 19 1 0x1 00:07:35.046 06:45:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=19 00:07:35.046 06:45:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:35.046 06:45:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:35.046 06:45:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:07:35.046 06:45:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_19.conf 00:07:35.046 06:45:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:35.046 06:45:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:35.046 06:45:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 19 00:07:35.046 06:45:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4419 00:07:35.046 06:45:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:07:35.046 06:45:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' 00:07:35.046 06:45:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4419"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:35.046 06:45:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:35.046 06:45:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # 
echo leak:nvmf_ctrlr_create 00:07:35.046 06:45:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' -c /tmp/fuzz_json_19.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 -Z 19 00:07:35.046 [2024-12-12 06:45:42.417988] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 00:07:35.046 [2024-12-12 06:45:42.418078] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1161687 ] 00:07:35.305 [2024-12-12 06:45:42.611167] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.305 [2024-12-12 06:45:42.644304] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.305 [2024-12-12 06:45:42.703052] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:35.305 [2024-12-12 06:45:42.719369] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4419 *** 00:07:35.305 INFO: Running with entropic power schedule (0xFF, 100). 00:07:35.305 INFO: Seed: 4050555547 00:07:35.305 INFO: Loaded 1 modules (390948 inline 8-bit counters): 390948 [0x2c8afcc, 0x2cea6f0), 00:07:35.305 INFO: Loaded 1 PC tables (390948 PCs): 390948 [0x2cea6f0,0x32e1930), 00:07:35.305 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:07:35.305 INFO: A corpus is not provided, starting from an empty corpus 00:07:35.305 #2 INITED exec/s: 0 rss: 64Mb 00:07:35.306 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:35.306 This may also happen if the target rejected all inputs we tried so far 00:07:35.306 [2024-12-12 06:45:42.767558] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1617494016 len:76 00:07:35.306 [2024-12-12 06:45:42.767588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:35.565 NEW_FUNC[1/716]: 0x45c358 in fuzz_nvm_write_uncorrectable_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:582 00:07:35.565 NEW_FUNC[2/716]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:35.565 #6 NEW cov: 12184 ft: 12173 corp: 2/11b lim: 50 exec/s: 0 rss: 71Mb L: 10/10 MS: 4 ChangeByte-CMP-CopyPart-InsertByte- DE: "i\000\000\000\000\000\000\000"- 00:07:35.824 [2024-12-12 06:45:43.088382] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:12884901888 len:2571 00:07:35.824 [2024-12-12 06:45:43.088417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:35.824 #23 NEW cov: 12297 ft: 12841 corp: 3/21b lim: 50 exec/s: 0 rss: 71Mb L: 10/10 MS: 2 CrossOver-CMP- DE: "\000\000\000\000\000\000\000\003"- 00:07:35.824 [2024-12-12 06:45:43.128411] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1617506560 len:76 00:07:35.824 [2024-12-12 06:45:43.128440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:35.824 #24 NEW cov: 12303 ft: 13116 corp: 4/31b lim: 50 exec/s: 0 rss: 71Mb L: 10/10 MS: 1 ChangeByte- 00:07:35.824 [2024-12-12 06:45:43.188636] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1761607680 len:76 00:07:35.824 [2024-12-12 06:45:43.188663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:35.824 #25 NEW cov: 12388 ft: 13340 corp: 5/41b lim: 50 exec/s: 0 rss: 71Mb L: 10/10 MS: 1 PersAutoDict- DE: "i\000\000\000\000\000\000\000"- 00:07:35.824 [2024-12-12 06:45:43.228699] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1617494016 len:1 00:07:35.824 [2024-12-12 06:45:43.228726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:35.824 #26 NEW cov: 12388 ft: 13477 corp: 6/52b lim: 50 exec/s: 0 rss: 71Mb L: 11/11 MS: 1 CopyPart- 00:07:35.824 [2024-12-12 06:45:43.268817] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744071032078335 len:76 00:07:35.824 [2024-12-12 06:45:43.268844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:35.824 #27 NEW cov: 12388 ft: 13537 corp: 7/62b lim: 50 exec/s: 0 rss: 71Mb L: 10/11 MS: 1 ChangeBinInt- 00:07:35.824 [2024-12-12 06:45:43.308906] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744071032078335 len:12545 00:07:35.824 [2024-12-12 06:45:43.308933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR 
FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.084 #28 NEW cov: 12388 ft: 13617 corp: 8/73b lim: 50 exec/s: 0 rss: 71Mb L: 11/11 MS: 1 InsertByte- 00:07:36.084 [2024-12-12 06:45:43.369095] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744071041935615 len:12545 00:07:36.084 [2024-12-12 06:45:43.369123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.084 #29 NEW cov: 12388 ft: 13667 corp: 9/84b lim: 50 exec/s: 0 rss: 71Mb L: 11/11 MS: 1 ShuffleBytes- 00:07:36.084 [2024-12-12 06:45:43.429255] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1617502464 len:76 00:07:36.084 [2024-12-12 06:45:43.429282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.084 #30 NEW cov: 12388 ft: 13724 corp: 10/94b lim: 50 exec/s: 0 rss: 71Mb L: 10/11 MS: 1 ChangeByte- 00:07:36.084 [2024-12-12 06:45:43.489913] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1761607680 len:1 00:07:36.084 [2024-12-12 06:45:43.489941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.084 [2024-12-12 06:45:43.489988] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:07:36.084 [2024-12-12 06:45:43.490004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:36.084 [2024-12-12 06:45:43.490060] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:07:36.084 [2024-12-12 06:45:43.490077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:36.084 [2024-12-12 06:45:43.490134] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:0 len:1 00:07:36.084 [2024-12-12 06:45:43.490154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:36.084 [2024-12-12 06:45:43.490211] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:4 nsid:0 lba:0 len:76 00:07:36.084 [2024-12-12 06:45:43.490227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:36.084 #31 NEW cov: 12388 ft: 14190 corp: 11/144b lim: 50 exec/s: 0 rss: 71Mb L: 50/50 MS: 1 InsertRepeatedBytes- 00:07:36.084 [2024-12-12 06:45:43.549597] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744071041935615 len:12549 00:07:36.084 [2024-12-12 06:45:43.549626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.084 #32 NEW cov: 12388 ft: 14233 corp: 12/155b lim: 50 exec/s: 0 rss: 71Mb L: 11/50 MS: 1 ChangeBit- 00:07:36.343 [2024-12-12 06:45:43.609790] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744071041939711 len:12545 00:07:36.344 [2024-12-12 06:45:43.609819] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.344 #33 NEW cov: 12388 ft: 14261 corp: 13/166b lim: 50 exec/s: 0 rss: 71Mb L: 11/50 MS: 1 ChangeBit- 00:07:36.344 [2024-12-12 06:45:43.649893] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:69805795985850368 len:76 00:07:36.344 [2024-12-12 06:45:43.649920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.344 NEW_FUNC[1/1]: 0x1c586b8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:652 00:07:36.344 #34 NEW cov: 12411 ft: 14318 corp: 14/176b lim: 50 exec/s: 0 rss: 72Mb L: 10/50 MS: 1 ChangeBinInt- 00:07:36.344 [2024-12-12 06:45:43.689996] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1617506560 len:76 00:07:36.344 [2024-12-12 06:45:43.690024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.344 #35 NEW cov: 12411 ft: 14343 corp: 15/189b lim: 50 exec/s: 0 rss: 72Mb L: 13/50 MS: 1 CrossOver- 00:07:36.344 [2024-12-12 06:45:43.730230] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:106005705335040 len:12545 00:07:36.344 [2024-12-12 06:45:43.730259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.344 [2024-12-12 06:45:43.730294] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:322122547200 len:76 00:07:36.344 [2024-12-12 06:45:43.730310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:36.344 #36 NEW cov: 12411 ft: 14613 corp: 16/209b lim: 50 exec/s: 36 rss: 72Mb L: 20/50 MS: 1 CrossOver- 00:07:36.344 [2024-12-12 06:45:43.790263] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1761607680 len:1 00:07:36.344 [2024-12-12 06:45:43.790295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.344 #37 NEW cov: 12411 ft: 14642 corp: 17/227b lim: 50 exec/s: 37 rss: 72Mb L: 18/50 MS: 1 InsertRepeatedBytes- 00:07:36.344 [2024-12-12 06:45:43.850427] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1617494121 len:1 00:07:36.344 [2024-12-12 06:45:43.850456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.603 #38 NEW cov: 12411 ft: 14685 corp: 18/238b lim: 50 exec/s: 38 rss: 72Mb L: 11/50 MS: 1 PersAutoDict- DE: "i\000\000\000\000\000\000\000"- 00:07:36.603 [2024-12-12 06:45:43.910737] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744071032078335 len:97 00:07:36.603 [2024-12-12 06:45:43.910765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.603 [2024-12-12 06:45:43.910817] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:82465133717760 len:1 00:07:36.603 [2024-12-12 
06:45:43.910832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:36.603 #39 NEW cov: 12411 ft: 14700 corp: 19/259b lim: 50 exec/s: 39 rss: 72Mb L: 21/50 MS: 1 CrossOver- 00:07:36.603 [2024-12-12 06:45:43.950728] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:21216624656777291 len:65377 00:07:36.603 [2024-12-12 06:45:43.950755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.603 #41 NEW cov: 12411 ft: 14717 corp: 20/271b lim: 50 exec/s: 41 rss: 72Mb L: 12/50 MS: 2 EraseBytes-CopyPart- 00:07:36.603 [2024-12-12 06:45:44.010889] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:12889096192 len:2571 00:07:36.603 [2024-12-12 06:45:44.010917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.603 #42 NEW cov: 12411 ft: 14758 corp: 21/281b lim: 50 exec/s: 42 rss: 72Mb L: 10/50 MS: 1 ChangeBit- 00:07:36.603 [2024-12-12 06:45:44.071189] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:1 00:07:36.603 [2024-12-12 06:45:44.071216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.603 [2024-12-12 06:45:44.071273] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:07:36.603 [2024-12-12 06:45:44.071289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:36.603 #43 NEW cov: 12411 ft: 14769 corp: 22/307b lim: 50 exec/s: 43 rss: 72Mb L: 26/50 MS: 1 InsertRepeatedBytes- 00:07:36.603 [2024-12-12 06:45:44.111290] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:1 00:07:36.603 [2024-12-12 06:45:44.111316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.603 [2024-12-12 06:45:44.111385] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:07:36.603 [2024-12-12 06:45:44.111402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:36.863 #44 NEW cov: 12411 ft: 14820 corp: 23/333b lim: 50 exec/s: 44 rss: 72Mb L: 26/50 MS: 1 ShuffleBytes- 00:07:36.863 [2024-12-12 06:45:44.171322] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744071041939711 len:12545 00:07:36.863 [2024-12-12 06:45:44.171350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.863 #45 NEW cov: 12411 ft: 14841 corp: 24/352b lim: 50 exec/s: 45 rss: 72Mb L: 19/50 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\003"- 00:07:36.863 [2024-12-12 06:45:44.231484] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:138538465099776 len:779 00:07:36.863 [2024-12-12 06:45:44.231512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 
cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.863 #46 NEW cov: 12411 ft: 14858 corp: 25/363b lim: 50 exec/s: 46 rss: 72Mb L: 11/50 MS: 1 InsertByte- 00:07:36.863 [2024-12-12 06:45:44.271566] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:70022399776522240 len:1 00:07:36.863 [2024-12-12 06:45:44.271593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.863 #47 NEW cov: 12411 ft: 14888 corp: 26/374b lim: 50 exec/s: 47 rss: 72Mb L: 11/50 MS: 1 InsertByte- 00:07:36.863 [2024-12-12 06:45:44.311704] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1617494016 len:1 00:07:36.863 [2024-12-12 06:45:44.311731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.863 #48 NEW cov: 12411 ft: 14938 corp: 27/385b lim: 50 exec/s: 48 rss: 72Mb L: 11/50 MS: 1 CopyPart- 00:07:36.863 [2024-12-12 06:45:44.351856] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18378627131109343232 len:1 00:07:36.863 [2024-12-12 06:45:44.351883] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.863 #50 NEW cov: 12411 ft: 14940 corp: 28/396b lim: 50 exec/s: 50 rss: 72Mb L: 11/50 MS: 2 EraseBytes-CMP- DE: "\377\016"- 00:07:37.122 [2024-12-12 06:45:44.391967] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744071041939711 len:12545 00:07:37.122 [2024-12-12 06:45:44.391994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:37.122 #51 NEW cov: 12411 ft: 14978 corp: 29/407b lim: 50 exec/s: 51 rss: 72Mb L: 11/50 MS: 1 ShuffleBytes- 00:07:37.122 [2024-12-12 06:45:44.432032] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1617493865 len:1 00:07:37.122 [2024-12-12 06:45:44.432060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:37.122 #52 NEW cov: 12411 ft: 14988 corp: 30/425b lim: 50 exec/s: 52 rss: 72Mb L: 18/50 MS: 1 PersAutoDict- DE: "i\000\000\000\000\000\000\000"- 00:07:37.122 [2024-12-12 06:45:44.472174] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:2648725128808809 len:1 00:07:37.122 [2024-12-12 06:45:44.472201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:37.122 #55 NEW cov: 12411 ft: 15010 corp: 31/440b lim: 50 exec/s: 55 rss: 73Mb L: 15/50 MS: 3 EraseBytes-ChangeBinInt-CopyPart- 00:07:37.122 [2024-12-12 06:45:44.532362] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:77268223262720 len:17991 00:07:37.122 [2024-12-12 06:45:44.532389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:37.122 #56 NEW cov: 12411 ft: 15013 corp: 32/454b lim: 50 exec/s: 56 rss: 73Mb L: 14/50 MS: 1 InsertRepeatedBytes- 00:07:37.122 [2024-12-12 06:45:44.572457] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 
nsid:0 lba:272680645331200 len:1 00:07:37.122 [2024-12-12 06:45:44.572483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:37.122 #57 NEW cov: 12411 ft: 15095 corp: 33/465b lim: 50 exec/s: 57 rss: 73Mb L: 11/50 MS: 1 InsertByte- 00:07:37.122 [2024-12-12 06:45:44.612581] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1617506304 len:76 00:07:37.122 [2024-12-12 06:45:44.612612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:37.122 #58 NEW cov: 12411 ft: 15107 corp: 34/475b lim: 50 exec/s: 58 rss: 73Mb L: 10/50 MS: 1 ChangeASCIIInt- 00:07:37.382 [2024-12-12 06:45:44.652768] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:106005705335040 len:24682 00:07:37.382 [2024-12-12 06:45:44.652796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:37.382 [2024-12-12 06:45:44.652837] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:322944630784 len:76 00:07:37.382 [2024-12-12 06:45:44.652853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:37.382 #59 NEW cov: 12411 ft: 15109 corp: 35/495b lim: 50 exec/s: 59 rss: 73Mb L: 20/50 MS: 1 CopyPart- 00:07:37.382 [2024-12-12 06:45:44.712827] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:29554872558813184 len:1 00:07:37.382 [2024-12-12 06:45:44.712854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:37.382 #60 NEW cov: 12411 ft: 15110 corp: 36/513b lim: 50 exec/s: 30 rss: 73Mb L: 18/50 MS: 1 PersAutoDict- DE: "i\000\000\000\000\000\000\000"- 00:07:37.382 #60 DONE cov: 12411 ft: 15110 corp: 36/513b lim: 50 exec/s: 30 rss: 73Mb 00:07:37.382 ###### Recommended dictionary. ###### 00:07:37.382 "i\000\000\000\000\000\000\000" # Uses: 4 00:07:37.382 "\000\000\000\000\000\000\000\003" # Uses: 1 00:07:37.382 "\377\016" # Uses: 0 00:07:37.382 ###### End of recommended dictionary. 
###### 00:07:37.382 Done 60 runs in 2 second(s) 00:07:37.382 06:45:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_19.conf /var/tmp/suppress_nvmf_fuzz 00:07:37.382 06:45:44 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:37.382 06:45:44 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:37.382 06:45:44 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 20 1 0x1 00:07:37.382 06:45:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=20 00:07:37.382 06:45:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:37.382 06:45:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:37.382 06:45:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:07:37.382 06:45:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_20.conf 00:07:37.382 06:45:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:37.382 06:45:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:37.382 06:45:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 20 00:07:37.382 06:45:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4420 00:07:37.382 06:45:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:07:37.382 06:45:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' 00:07:37.382 06:45:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4420"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:37.382 06:45:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:37.382 06:45:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:37.382 06:45:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' -c /tmp/fuzz_json_20.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 -Z 20 00:07:37.382 [2024-12-12 06:45:44.903481] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 
00:07:37.382 [2024-12-12 06:45:44.903559] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1162021 ] 00:07:37.641 [2024-12-12 06:45:45.093762] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.641 [2024-12-12 06:45:45.127804] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.900 [2024-12-12 06:45:45.187095] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:37.900 [2024-12-12 06:45:45.203427] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:07:37.900 INFO: Running with entropic power schedule (0xFF, 100). 00:07:37.900 INFO: Seed: 2239593602 00:07:37.900 INFO: Loaded 1 modules (390948 inline 8-bit counters): 390948 [0x2c8afcc, 0x2cea6f0), 00:07:37.900 INFO: Loaded 1 PC tables (390948 PCs): 390948 [0x2cea6f0,0x32e1930), 00:07:37.900 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:07:37.900 INFO: A corpus is not provided, starting from an empty corpus 00:07:37.900 #2 INITED exec/s: 0 rss: 65Mb 00:07:37.900 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:37.900 This may also happen if the target rejected all inputs we tried so far 00:07:37.900 [2024-12-12 06:45:45.272246] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:37.900 [2024-12-12 06:45:45.272284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:38.159 NEW_FUNC[1/718]: 0x45df18 in fuzz_nvm_reservation_acquire_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:597 00:07:38.159 NEW_FUNC[2/718]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:38.159 #6 NEW cov: 12242 ft: 12210 corp: 2/20b lim: 90 exec/s: 0 rss: 72Mb L: 19/19 MS: 4 InsertRepeatedBytes-CrossOver-ChangeBit-InsertRepeatedBytes- 00:07:38.159 [2024-12-12 06:45:45.623134] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:38.159 [2024-12-12 06:45:45.623194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:38.159 #12 NEW cov: 12355 ft: 12736 corp: 3/39b lim: 90 exec/s: 0 rss: 72Mb L: 19/19 MS: 1 ChangeByte- 00:07:38.418 [2024-12-12 06:45:45.693385] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:38.418 [2024-12-12 06:45:45.693422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:38.418 #13 NEW cov: 12361 ft: 12978 corp: 4/58b lim: 90 exec/s: 0 rss: 72Mb L: 19/19 MS: 1 CMP- DE: "\000\000\000\000"- 00:07:38.418 [2024-12-12 06:45:45.733394] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:38.418 [2024-12-12 06:45:45.733427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:38.418 #17 NEW cov: 12446 ft: 13182 corp: 5/77b lim: 90 exec/s: 0 rss: 
72Mb L: 19/19 MS: 4 EraseBytes-ShuffleBytes-ChangeBit-InsertRepeatedBytes- 00:07:38.418 [2024-12-12 06:45:45.803668] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:38.418 [2024-12-12 06:45:45.803704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:38.418 #18 NEW cov: 12446 ft: 13243 corp: 6/97b lim: 90 exec/s: 0 rss: 72Mb L: 20/20 MS: 1 InsertByte- 00:07:38.418 [2024-12-12 06:45:45.844035] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:38.419 [2024-12-12 06:45:45.844062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:38.419 [2024-12-12 06:45:45.844198] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:38.419 [2024-12-12 06:45:45.844224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:38.419 #19 NEW cov: 12446 ft: 14129 corp: 7/138b lim: 90 exec/s: 0 rss: 72Mb L: 41/41 MS: 1 InsertRepeatedBytes- 00:07:38.419 [2024-12-12 06:45:45.893877] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:38.419 [2024-12-12 06:45:45.893909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:38.419 #20 NEW cov: 12446 ft: 14265 corp: 8/165b lim: 90 exec/s: 0 rss: 72Mb L: 27/41 MS: 1 CMP- DE: "\000\000\000\000\000\000\000\006"- 00:07:38.678 [2024-12-12 06:45:45.953997] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:38.678 [2024-12-12 06:45:45.954033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:38.678 #26 NEW cov: 12446 ft: 14369 corp: 9/184b lim: 90 exec/s: 0 rss: 72Mb L: 19/41 MS: 1 PersAutoDict- DE: "\000\000\000\000"- 00:07:38.678 [2024-12-12 06:45:46.004174] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:38.678 [2024-12-12 06:45:46.004207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:38.678 #31 NEW cov: 12446 ft: 14391 corp: 10/212b lim: 90 exec/s: 0 rss: 72Mb L: 28/41 MS: 5 ChangeByte-ChangeBit-CrossOver-ShuffleBytes-InsertRepeatedBytes- 00:07:38.678 [2024-12-12 06:45:46.045019] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:38.678 [2024-12-12 06:45:46.045049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:38.678 [2024-12-12 06:45:46.045151] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:38.678 [2024-12-12 06:45:46.045176] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:38.678 [2024-12-12 06:45:46.045300] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:38.678 [2024-12-12 06:45:46.045324] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:38.678 [2024-12-12 06:45:46.045450] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:38.678 [2024-12-12 06:45:46.045473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:38.678 #32 NEW cov: 12446 ft: 14886 corp: 11/284b lim: 90 exec/s: 0 rss: 72Mb L: 72/72 MS: 1 InsertRepeatedBytes- 00:07:38.678 [2024-12-12 06:45:46.114483] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:38.678 [2024-12-12 06:45:46.114508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:38.678 NEW_FUNC[1/1]: 0x1c586b8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:652 00:07:38.678 #33 NEW cov: 12469 ft: 14923 corp: 12/312b lim: 90 exec/s: 0 rss: 73Mb L: 28/72 MS: 1 ShuffleBytes- 00:07:38.678 [2024-12-12 06:45:46.185028] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:38.678 [2024-12-12 06:45:46.185053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:38.678 [2024-12-12 06:45:46.185170] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:38.678 [2024-12-12 06:45:46.185209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:38.937 #34 NEW cov: 12469 ft: 14947 corp: 13/361b lim: 90 exec/s: 0 rss: 73Mb L: 49/72 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\006"- 00:07:38.937 [2024-12-12 06:45:46.244880] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:38.937 [2024-12-12 06:45:46.244913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:38.937 #35 NEW cov: 12469 ft: 14971 corp: 14/381b lim: 90 exec/s: 35 rss: 73Mb L: 20/72 MS: 1 InsertByte- 00:07:38.937 [2024-12-12 06:45:46.305297] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:38.937 [2024-12-12 06:45:46.305330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:38.937 [2024-12-12 06:45:46.305439] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:38.937 [2024-12-12 06:45:46.305463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:38.937 #36 NEW cov: 12469 ft: 14984 corp: 15/417b lim: 90 exec/s: 36 rss: 73Mb L: 36/72 MS: 1 CopyPart- 00:07:38.937 [2024-12-12 06:45:46.345888] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:38.937 [2024-12-12 06:45:46.345919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:38.937 [2024-12-12 06:45:46.346038] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:38.937 
[2024-12-12 06:45:46.346061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:38.937 [2024-12-12 06:45:46.346194] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:38.937 [2024-12-12 06:45:46.346218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:38.937 [2024-12-12 06:45:46.346339] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:38.937 [2024-12-12 06:45:46.346365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:38.937 #37 NEW cov: 12469 ft: 15054 corp: 16/492b lim: 90 exec/s: 37 rss: 73Mb L: 75/75 MS: 1 InsertRepeatedBytes- 00:07:38.937 [2024-12-12 06:45:46.405259] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:38.937 [2024-12-12 06:45:46.405291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:38.937 #38 NEW cov: 12469 ft: 15067 corp: 17/511b lim: 90 exec/s: 38 rss: 73Mb L: 19/75 MS: 1 ChangeByte- 00:07:38.937 [2024-12-12 06:45:46.446169] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:38.937 [2024-12-12 06:45:46.446200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:38.937 [2024-12-12 06:45:46.446279] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:38.937 [2024-12-12 06:45:46.446307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:38.937 [2024-12-12 06:45:46.446440] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:38.937 [2024-12-12 06:45:46.446466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:38.937 [2024-12-12 06:45:46.446588] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:38.937 [2024-12-12 06:45:46.446613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:39.196 #39 NEW cov: 12469 ft: 15072 corp: 18/590b lim: 90 exec/s: 39 rss: 73Mb L: 79/79 MS: 1 PersAutoDict- DE: "\000\000\000\000"- 00:07:39.197 [2024-12-12 06:45:46.505586] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:39.197 [2024-12-12 06:45:46.505612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.197 #40 NEW cov: 12469 ft: 15182 corp: 19/609b lim: 90 exec/s: 40 rss: 73Mb L: 19/79 MS: 1 ChangeASCIIInt- 00:07:39.197 [2024-12-12 06:45:46.555684] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:39.197 [2024-12-12 06:45:46.555717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.197 
#41 NEW cov: 12469 ft: 15213 corp: 20/643b lim: 90 exec/s: 41 rss: 73Mb L: 34/79 MS: 1 CrossOver- 00:07:39.197 [2024-12-12 06:45:46.595847] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:39.197 [2024-12-12 06:45:46.595878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.197 #42 NEW cov: 12469 ft: 15234 corp: 21/662b lim: 90 exec/s: 42 rss: 73Mb L: 19/79 MS: 1 ChangeByte- 00:07:39.197 [2024-12-12 06:45:46.636253] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:39.197 [2024-12-12 06:45:46.636287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.197 [2024-12-12 06:45:46.636393] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:39.197 [2024-12-12 06:45:46.636416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:39.197 #43 NEW cov: 12469 ft: 15297 corp: 22/703b lim: 90 exec/s: 43 rss: 73Mb L: 41/79 MS: 1 ShuffleBytes- 00:07:39.197 [2024-12-12 06:45:46.686157] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:39.197 [2024-12-12 06:45:46.686189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.456 #44 NEW cov: 12469 ft: 15340 corp: 23/731b lim: 90 exec/s: 44 rss: 73Mb L: 28/79 MS: 1 EraseBytes- 00:07:39.456 [2024-12-12 06:45:46.756346] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:39.456 [2024-12-12 06:45:46.756379] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.456 #45 NEW cov: 12469 ft: 15363 corp: 24/750b lim: 90 exec/s: 45 rss: 73Mb L: 19/79 MS: 1 ChangeBit- 00:07:39.456 [2024-12-12 06:45:46.826898] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:39.456 [2024-12-12 06:45:46.826930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.456 [2024-12-12 06:45:46.827043] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:39.456 [2024-12-12 06:45:46.827070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:39.456 #51 NEW cov: 12469 ft: 15388 corp: 25/786b lim: 90 exec/s: 51 rss: 73Mb L: 36/79 MS: 1 ShuffleBytes- 00:07:39.456 [2024-12-12 06:45:46.896851] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:39.456 [2024-12-12 06:45:46.896887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.456 #52 NEW cov: 12469 ft: 15406 corp: 26/814b lim: 90 exec/s: 52 rss: 73Mb L: 28/79 MS: 1 PersAutoDict- DE: "\000\000\000\000"- 00:07:39.456 [2024-12-12 06:45:46.967040] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:39.456 [2024-12-12 06:45:46.967081] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.715 #53 NEW cov: 12469 ft: 15463 corp: 27/842b lim: 90 exec/s: 53 rss: 73Mb L: 28/79 MS: 1 ChangeBit- 00:07:39.715 [2024-12-12 06:45:47.017133] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:39.715 [2024-12-12 06:45:47.017172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.715 #54 NEW cov: 12469 ft: 15481 corp: 28/862b lim: 90 exec/s: 54 rss: 74Mb L: 20/79 MS: 1 InsertByte- 00:07:39.715 [2024-12-12 06:45:47.087324] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:39.715 [2024-12-12 06:45:47.087356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.715 #55 NEW cov: 12469 ft: 15498 corp: 29/881b lim: 90 exec/s: 55 rss: 74Mb L: 19/79 MS: 1 CopyPart- 00:07:39.715 [2024-12-12 06:45:47.127530] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:39.715 [2024-12-12 06:45:47.127563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.715 #56 NEW cov: 12469 ft: 15518 corp: 30/900b lim: 90 exec/s: 56 rss: 74Mb L: 19/79 MS: 1 ShuffleBytes- 00:07:39.715 [2024-12-12 06:45:47.177852] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:39.715 [2024-12-12 06:45:47.177880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.715 [2024-12-12 06:45:47.178010] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:39.715 [2024-12-12 06:45:47.178031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:39.715 #57 NEW cov: 12469 ft: 15526 corp: 31/936b lim: 90 exec/s: 57 rss: 74Mb L: 36/79 MS: 1 ChangeBit- 00:07:39.975 [2024-12-12 06:45:47.247798] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:39.975 [2024-12-12 06:45:47.247824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.975 #58 NEW cov: 12469 ft: 15562 corp: 32/955b lim: 90 exec/s: 29 rss: 74Mb L: 19/79 MS: 1 CopyPart- 00:07:39.975 #58 DONE cov: 12469 ft: 15562 corp: 32/955b lim: 90 exec/s: 29 rss: 74Mb 00:07:39.975 ###### Recommended dictionary. ###### 00:07:39.975 "\000\000\000\000" # Uses: 4 00:07:39.975 "\000\000\000\000\000\000\000\006" # Uses: 2 00:07:39.975 ###### End of recommended dictionary. 
###### 00:07:39.975 Done 58 runs in 2 second(s) 00:07:39.975 06:45:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_20.conf /var/tmp/suppress_nvmf_fuzz 00:07:39.975 06:45:47 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:39.975 06:45:47 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:39.975 06:45:47 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 21 1 0x1 00:07:39.975 06:45:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=21 00:07:39.975 06:45:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:39.975 06:45:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:39.975 06:45:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:07:39.975 06:45:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_21.conf 00:07:39.975 06:45:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:39.975 06:45:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:39.975 06:45:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 21 00:07:39.975 06:45:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4421 00:07:39.975 06:45:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:07:39.975 06:45:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' 00:07:39.975 06:45:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4421"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:39.975 06:45:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:39.975 06:45:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:39.975 06:45:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' -c /tmp/fuzz_json_21.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 -Z 21 00:07:39.975 [2024-12-12 06:45:47.424765] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 
00:07:39.975 [2024-12-12 06:45:47.424831] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1162548 ] 00:07:40.235 [2024-12-12 06:45:47.611272] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.235 [2024-12-12 06:45:47.644750] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.235 [2024-12-12 06:45:47.703526] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:40.235 [2024-12-12 06:45:47.719787] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4421 *** 00:07:40.235 INFO: Running with entropic power schedule (0xFF, 100). 00:07:40.235 INFO: Seed: 461623015 00:07:40.235 INFO: Loaded 1 modules (390948 inline 8-bit counters): 390948 [0x2c8afcc, 0x2cea6f0), 00:07:40.235 INFO: Loaded 1 PC tables (390948 PCs): 390948 [0x2cea6f0,0x32e1930), 00:07:40.235 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:07:40.235 INFO: A corpus is not provided, starting from an empty corpus 00:07:40.235 #2 INITED exec/s: 0 rss: 65Mb 00:07:40.235 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:40.235 This may also happen if the target rejected all inputs we tried so far 00:07:40.494 [2024-12-12 06:45:47.768680] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:40.494 [2024-12-12 06:45:47.768711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:40.494 [2024-12-12 06:45:47.768765] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:40.494 [2024-12-12 06:45:47.768782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:40.753 NEW_FUNC[1/718]: 0x461148 in fuzz_nvm_reservation_release_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:623 00:07:40.753 NEW_FUNC[2/718]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:40.753 #4 NEW cov: 12215 ft: 12209 corp: 2/22b lim: 50 exec/s: 0 rss: 72Mb L: 21/21 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:07:40.753 [2024-12-12 06:45:48.099615] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:40.753 [2024-12-12 06:45:48.099650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:40.753 [2024-12-12 06:45:48.099726] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:40.753 [2024-12-12 06:45:48.099743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:40.753 #5 NEW cov: 12330 ft: 12830 corp: 3/48b lim: 50 exec/s: 0 rss: 72Mb L: 26/26 MS: 1 InsertRepeatedBytes- 00:07:40.753 [2024-12-12 06:45:48.139826] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:40.753 [2024-12-12 06:45:48.139855] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:40.753 [2024-12-12 06:45:48.139896] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:40.753 [2024-12-12 06:45:48.139914] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:40.753 [2024-12-12 06:45:48.139971] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:40.753 [2024-12-12 06:45:48.139988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:40.753 #6 NEW cov: 12336 ft: 13276 corp: 4/86b lim: 50 exec/s: 0 rss: 72Mb L: 38/38 MS: 1 CrossOver- 00:07:40.753 [2024-12-12 06:45:48.200001] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:40.753 [2024-12-12 06:45:48.200030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:40.753 [2024-12-12 06:45:48.200070] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:40.753 [2024-12-12 06:45:48.200087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:40.753 [2024-12-12 06:45:48.200146] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:40.754 [2024-12-12 06:45:48.200168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:40.754 #7 NEW cov: 12421 ft: 13586 corp: 5/125b lim: 50 exec/s: 0 rss: 72Mb L: 39/39 MS: 1 CrossOver- 00:07:40.754 [2024-12-12 06:45:48.260295] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:40.754 [2024-12-12 06:45:48.260324] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:40.754 [2024-12-12 06:45:48.260372] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:40.754 [2024-12-12 06:45:48.260390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:40.754 [2024-12-12 06:45:48.260446] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:40.754 [2024-12-12 06:45:48.260463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:40.754 [2024-12-12 06:45:48.260522] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:40.754 [2024-12-12 06:45:48.260539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:41.013 #8 NEW cov: 12421 ft: 14069 corp: 6/174b lim: 50 exec/s: 0 rss: 72Mb L: 49/49 MS: 1 CrossOver- 00:07:41.013 [2024-12-12 06:45:48.319977] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:41.013 [2024-12-12 06:45:48.320004] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.013 #12 NEW cov: 12421 ft: 14870 corp: 7/184b lim: 50 exec/s: 0 rss: 72Mb L: 10/49 MS: 4 ChangeByte-CopyPart-ChangeBit-CMP- DE: "\001\002!P\375\236\006<"- 00:07:41.014 [2024-12-12 06:45:48.360549] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:41.014 [2024-12-12 06:45:48.360577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.014 [2024-12-12 06:45:48.360624] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:41.014 [2024-12-12 06:45:48.360646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:41.014 [2024-12-12 06:45:48.360717] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:41.014 [2024-12-12 06:45:48.360733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:41.014 [2024-12-12 06:45:48.360791] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:41.014 [2024-12-12 06:45:48.360806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:41.014 #13 NEW cov: 12421 ft: 14978 corp: 8/231b lim: 50 exec/s: 0 rss: 72Mb L: 47/49 MS: 1 CrossOver- 00:07:41.014 [2024-12-12 06:45:48.400664] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:41.014 [2024-12-12 06:45:48.400691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.014 [2024-12-12 06:45:48.400740] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:41.014 [2024-12-12 06:45:48.400756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:41.014 [2024-12-12 06:45:48.400814] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:41.014 [2024-12-12 06:45:48.400830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:41.014 [2024-12-12 06:45:48.400888] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:41.014 [2024-12-12 06:45:48.400905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:41.014 #14 NEW cov: 12421 ft: 14999 corp: 9/271b lim: 50 exec/s: 0 rss: 72Mb L: 40/49 MS: 1 InsertByte- 00:07:41.014 [2024-12-12 06:45:48.440597] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:41.014 [2024-12-12 06:45:48.440624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.014 [2024-12-12 06:45:48.440672] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:41.014 
[2024-12-12 06:45:48.440688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:41.014 [2024-12-12 06:45:48.440745] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:41.014 [2024-12-12 06:45:48.440759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:41.014 #15 NEW cov: 12421 ft: 15047 corp: 10/310b lim: 50 exec/s: 0 rss: 72Mb L: 39/49 MS: 1 ChangeBit- 00:07:41.014 [2024-12-12 06:45:48.480612] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:41.014 [2024-12-12 06:45:48.480642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.014 [2024-12-12 06:45:48.480696] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:41.014 [2024-12-12 06:45:48.480712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:41.014 #16 NEW cov: 12421 ft: 15087 corp: 11/336b lim: 50 exec/s: 0 rss: 72Mb L: 26/49 MS: 1 ChangeByte- 00:07:41.014 [2024-12-12 06:45:48.521029] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:41.014 [2024-12-12 06:45:48.521056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.014 [2024-12-12 06:45:48.521118] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:41.014 [2024-12-12 06:45:48.521135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:41.014 [2024-12-12 06:45:48.521193] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:41.014 [2024-12-12 06:45:48.521209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:41.014 [2024-12-12 06:45:48.521275] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:41.014 [2024-12-12 06:45:48.521291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:41.274 #17 NEW cov: 12421 ft: 15136 corp: 12/376b lim: 50 exec/s: 0 rss: 72Mb L: 40/49 MS: 1 InsertByte- 00:07:41.274 [2024-12-12 06:45:48.560806] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:41.274 [2024-12-12 06:45:48.560834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.274 [2024-12-12 06:45:48.560892] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:41.274 [2024-12-12 06:45:48.560911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:41.274 #18 NEW cov: 12421 ft: 15176 corp: 13/399b lim: 50 exec/s: 0 rss: 72Mb L: 23/49 MS: 1 CrossOver- 00:07:41.274 [2024-12-12 06:45:48.621297] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:41.274 [2024-12-12 06:45:48.621324] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.274 [2024-12-12 06:45:48.621389] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:41.274 [2024-12-12 06:45:48.621406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:41.274 [2024-12-12 06:45:48.621464] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:41.274 [2024-12-12 06:45:48.621480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:41.274 [2024-12-12 06:45:48.621536] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:41.274 [2024-12-12 06:45:48.621553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:41.274 #19 NEW cov: 12421 ft: 15202 corp: 14/448b lim: 50 exec/s: 0 rss: 73Mb L: 49/49 MS: 1 InsertRepeatedBytes- 00:07:41.274 [2024-12-12 06:45:48.661409] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:41.274 [2024-12-12 06:45:48.661437] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.274 [2024-12-12 06:45:48.661486] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:41.274 [2024-12-12 06:45:48.661503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:41.274 [2024-12-12 06:45:48.661562] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:41.274 [2024-12-12 06:45:48.661579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:41.274 [2024-12-12 06:45:48.661637] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:41.274 [2024-12-12 06:45:48.661656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:41.274 NEW_FUNC[1/1]: 0x1c586b8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:652 00:07:41.274 #20 NEW cov: 12444 ft: 15236 corp: 15/497b lim: 50 exec/s: 0 rss: 73Mb L: 49/49 MS: 1 CrossOver- 00:07:41.274 [2024-12-12 06:45:48.721570] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:41.274 [2024-12-12 06:45:48.721597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.274 [2024-12-12 06:45:48.721665] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:41.274 [2024-12-12 06:45:48.721682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:41.274 [2024-12-12 06:45:48.721741] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:41.274 [2024-12-12 06:45:48.721757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:41.274 [2024-12-12 06:45:48.721815] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:41.274 [2024-12-12 06:45:48.721832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:41.274 #21 NEW cov: 12444 ft: 15273 corp: 16/537b lim: 50 exec/s: 0 rss: 73Mb L: 40/49 MS: 1 ChangeBinInt- 00:07:41.274 [2024-12-12 06:45:48.761673] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:41.274 [2024-12-12 06:45:48.761700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.274 [2024-12-12 06:45:48.761765] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:41.274 [2024-12-12 06:45:48.761782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:41.274 [2024-12-12 06:45:48.761841] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:41.274 [2024-12-12 06:45:48.761858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:41.274 [2024-12-12 06:45:48.761917] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:41.274 [2024-12-12 06:45:48.761934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:41.534 #22 NEW cov: 12444 ft: 15287 corp: 17/577b lim: 50 exec/s: 22 rss: 73Mb L: 40/49 MS: 1 InsertByte- 00:07:41.534 [2024-12-12 06:45:48.821835] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:41.534 [2024-12-12 06:45:48.821862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.534 [2024-12-12 06:45:48.821931] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:41.534 [2024-12-12 06:45:48.821949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:41.534 [2024-12-12 06:45:48.822006] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:41.534 [2024-12-12 06:45:48.822023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:41.534 [2024-12-12 06:45:48.822081] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:41.534 [2024-12-12 06:45:48.822098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:41.534 #23 NEW cov: 12444 ft: 15296 corp: 18/622b lim: 50 exec/s: 23 rss: 73Mb L: 45/49 MS: 1 EraseBytes- 00:07:41.534 [2024-12-12 06:45:48.882028] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:41.534 [2024-12-12 06:45:48.882055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.534 [2024-12-12 06:45:48.882121] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:41.534 [2024-12-12 06:45:48.882137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:41.534 [2024-12-12 06:45:48.882199] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:41.534 [2024-12-12 06:45:48.882216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:41.534 [2024-12-12 06:45:48.882274] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:41.534 [2024-12-12 06:45:48.882289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:41.534 #24 NEW cov: 12444 ft: 15359 corp: 19/670b lim: 50 exec/s: 24 rss: 73Mb L: 48/49 MS: 1 PersAutoDict- DE: "\001\002!P\375\236\006<"- 00:07:41.534 [2024-12-12 06:45:48.942329] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:41.534 [2024-12-12 06:45:48.942358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.534 [2024-12-12 06:45:48.942432] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:41.534 [2024-12-12 06:45:48.942449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:41.534 [2024-12-12 06:45:48.942509] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:41.534 [2024-12-12 06:45:48.942524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:41.534 [2024-12-12 06:45:48.942582] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:41.534 [2024-12-12 06:45:48.942598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:41.534 [2024-12-12 06:45:48.942656] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:07:41.534 [2024-12-12 06:45:48.942673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:41.534 #25 NEW cov: 12444 ft: 15408 corp: 20/720b lim: 50 exec/s: 25 rss: 73Mb L: 50/50 MS: 1 InsertByte- 00:07:41.534 [2024-12-12 06:45:49.002522] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:41.534 [2024-12-12 06:45:49.002550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.534 [2024-12-12 06:45:49.002623] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) 
sqid:1 cid:1 nsid:0 00:07:41.534 [2024-12-12 06:45:49.002641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:41.534 [2024-12-12 06:45:49.002699] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:41.534 [2024-12-12 06:45:49.002715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:41.534 [2024-12-12 06:45:49.002772] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:41.534 [2024-12-12 06:45:49.002789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:41.534 [2024-12-12 06:45:49.002848] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:07:41.534 [2024-12-12 06:45:49.002866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:41.534 #26 NEW cov: 12444 ft: 15416 corp: 21/770b lim: 50 exec/s: 26 rss: 73Mb L: 50/50 MS: 1 CrossOver- 00:07:41.534 [2024-12-12 06:45:49.042634] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:41.534 [2024-12-12 06:45:49.042663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.534 [2024-12-12 06:45:49.042715] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:41.534 [2024-12-12 06:45:49.042732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:41.534 [2024-12-12 06:45:49.042791] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:41.534 [2024-12-12 06:45:49.042807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:41.534 [2024-12-12 06:45:49.042865] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:41.534 [2024-12-12 06:45:49.042881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:41.534 [2024-12-12 06:45:49.042938] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:07:41.534 [2024-12-12 06:45:49.042956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:41.794 #27 NEW cov: 12444 ft: 15430 corp: 22/820b lim: 50 exec/s: 27 rss: 73Mb L: 50/50 MS: 1 InsertByte- 00:07:41.794 [2024-12-12 06:45:49.082789] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:41.794 [2024-12-12 06:45:49.082818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.794 [2024-12-12 06:45:49.082875] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:41.794 [2024-12-12 06:45:49.082892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:41.794 [2024-12-12 06:45:49.082951] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:41.794 [2024-12-12 06:45:49.082967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:41.794 [2024-12-12 06:45:49.083023] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:41.794 [2024-12-12 06:45:49.083039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:41.794 [2024-12-12 06:45:49.083098] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:07:41.794 [2024-12-12 06:45:49.083116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:41.794 #28 NEW cov: 12444 ft: 15443 corp: 23/870b lim: 50 exec/s: 28 rss: 73Mb L: 50/50 MS: 1 ChangeBinInt- 00:07:41.794 [2024-12-12 06:45:49.142468] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:41.794 [2024-12-12 06:45:49.142496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.794 [2024-12-12 06:45:49.142548] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:41.794 [2024-12-12 06:45:49.142567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:41.794 #29 NEW cov: 12444 ft: 15445 corp: 24/896b lim: 50 exec/s: 29 rss: 73Mb L: 26/50 MS: 1 PersAutoDict- DE: "\001\002!P\375\236\006<"- 00:07:41.794 [2024-12-12 06:45:49.203087] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:41.794 [2024-12-12 06:45:49.203114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.794 [2024-12-12 06:45:49.203176] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:41.794 [2024-12-12 06:45:49.203193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:41.794 [2024-12-12 06:45:49.203267] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:41.794 [2024-12-12 06:45:49.203282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:41.794 [2024-12-12 06:45:49.203340] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:41.794 [2024-12-12 06:45:49.203356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:41.794 [2024-12-12 06:45:49.203414] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:07:41.794 [2024-12-12 06:45:49.203430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 
00:07:41.794 #30 NEW cov: 12444 ft: 15454 corp: 25/946b lim: 50 exec/s: 30 rss: 73Mb L: 50/50 MS: 1 ChangeByte- 00:07:41.794 [2024-12-12 06:45:49.262939] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:41.794 [2024-12-12 06:45:49.262967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.794 [2024-12-12 06:45:49.263025] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:41.794 [2024-12-12 06:45:49.263043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:41.794 [2024-12-12 06:45:49.263102] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:41.794 [2024-12-12 06:45:49.263120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:41.794 #32 NEW cov: 12444 ft: 15503 corp: 26/980b lim: 50 exec/s: 32 rss: 74Mb L: 34/50 MS: 2 PersAutoDict-CrossOver- DE: "\001\002!P\375\236\006<"- 00:07:41.794 [2024-12-12 06:45:49.303197] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:41.794 [2024-12-12 06:45:49.303225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.794 [2024-12-12 06:45:49.303273] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:41.794 [2024-12-12 06:45:49.303290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:41.794 [2024-12-12 06:45:49.303346] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:41.794 [2024-12-12 06:45:49.303362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:41.794 [2024-12-12 06:45:49.303418] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:41.794 [2024-12-12 06:45:49.303435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:42.054 #33 NEW cov: 12444 ft: 15532 corp: 27/1021b lim: 50 exec/s: 33 rss: 74Mb L: 41/50 MS: 1 CrossOver- 00:07:42.054 [2024-12-12 06:45:49.363376] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:42.054 [2024-12-12 06:45:49.363403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:42.054 [2024-12-12 06:45:49.363471] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:42.054 [2024-12-12 06:45:49.363489] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:42.054 [2024-12-12 06:45:49.363552] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:42.054 [2024-12-12 06:45:49.363569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 
cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:42.054 [2024-12-12 06:45:49.363626] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:42.054 [2024-12-12 06:45:49.363641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:42.054 #34 NEW cov: 12444 ft: 15540 corp: 28/1067b lim: 50 exec/s: 34 rss: 74Mb L: 46/50 MS: 1 EraseBytes- 00:07:42.054 [2024-12-12 06:45:49.423713] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:42.054 [2024-12-12 06:45:49.423741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:42.054 [2024-12-12 06:45:49.423811] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:42.054 [2024-12-12 06:45:49.423828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:42.054 [2024-12-12 06:45:49.423889] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:42.054 [2024-12-12 06:45:49.423906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:42.054 [2024-12-12 06:45:49.423964] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:42.054 [2024-12-12 06:45:49.423981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:42.054 [2024-12-12 06:45:49.424041] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:07:42.054 [2024-12-12 06:45:49.424057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:42.054 #35 NEW cov: 12444 ft: 15592 corp: 29/1117b lim: 50 exec/s: 35 rss: 74Mb L: 50/50 MS: 1 ChangeBit- 00:07:42.054 [2024-12-12 06:45:49.483702] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:42.054 [2024-12-12 06:45:49.483729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:42.054 [2024-12-12 06:45:49.483797] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:42.054 [2024-12-12 06:45:49.483815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:42.054 [2024-12-12 06:45:49.483875] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:42.054 [2024-12-12 06:45:49.483891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:42.054 [2024-12-12 06:45:49.483950] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:42.054 [2024-12-12 06:45:49.483966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:42.054 #36 NEW cov: 12444 ft: 15695 corp: 30/1162b lim: 50 exec/s: 36 rss: 
74Mb L: 45/50 MS: 1 ChangeBit- 00:07:42.054 [2024-12-12 06:45:49.543912] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:42.054 [2024-12-12 06:45:49.543940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:42.054 [2024-12-12 06:45:49.544004] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:42.054 [2024-12-12 06:45:49.544020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:42.054 [2024-12-12 06:45:49.544079] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:42.054 [2024-12-12 06:45:49.544096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:42.054 [2024-12-12 06:45:49.544158] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:42.054 [2024-12-12 06:45:49.544175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:42.314 #37 NEW cov: 12444 ft: 15729 corp: 31/1209b lim: 50 exec/s: 37 rss: 74Mb L: 47/50 MS: 1 CrossOver- 00:07:42.314 [2024-12-12 06:45:49.604145] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:42.314 [2024-12-12 06:45:49.604177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:42.314 [2024-12-12 06:45:49.604236] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:42.314 [2024-12-12 06:45:49.604252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:42.314 [2024-12-12 06:45:49.604311] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:42.314 [2024-12-12 06:45:49.604327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:42.314 [2024-12-12 06:45:49.604385] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:42.314 [2024-12-12 06:45:49.604401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:42.314 [2024-12-12 06:45:49.604460] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:07:42.314 [2024-12-12 06:45:49.604474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:42.314 #38 NEW cov: 12444 ft: 15734 corp: 32/1259b lim: 50 exec/s: 38 rss: 74Mb L: 50/50 MS: 1 CopyPart- 00:07:42.314 [2024-12-12 06:45:49.643982] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:42.314 [2024-12-12 06:45:49.644010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:42.314 [2024-12-12 06:45:49.644069] nvme_qpair.c: 256:nvme_io_qpair_print_command: 
*NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:42.314 [2024-12-12 06:45:49.644086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:42.314 [2024-12-12 06:45:49.644146] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:42.314 [2024-12-12 06:45:49.644168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:42.314 #39 NEW cov: 12444 ft: 15742 corp: 33/1298b lim: 50 exec/s: 39 rss: 74Mb L: 39/50 MS: 1 CopyPart- 00:07:42.314 [2024-12-12 06:45:49.683920] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:42.314 [2024-12-12 06:45:49.683952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:42.314 [2024-12-12 06:45:49.684014] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:42.314 [2024-12-12 06:45:49.684030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:42.314 #40 NEW cov: 12444 ft: 15749 corp: 34/1319b lim: 50 exec/s: 40 rss: 74Mb L: 21/50 MS: 1 ChangeByte- 00:07:42.314 [2024-12-12 06:45:49.744404] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:42.314 [2024-12-12 06:45:49.744433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:42.314 [2024-12-12 06:45:49.744480] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:42.314 [2024-12-12 06:45:49.744498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:42.314 [2024-12-12 06:45:49.744557] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:42.314 [2024-12-12 06:45:49.744575] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:42.314 [2024-12-12 06:45:49.744634] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:42.314 [2024-12-12 06:45:49.744651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:42.314 #41 NEW cov: 12444 ft: 15777 corp: 35/1361b lim: 50 exec/s: 20 rss: 74Mb L: 42/50 MS: 1 PersAutoDict- DE: "\001\002!P\375\236\006<"- 00:07:42.314 #41 DONE cov: 12444 ft: 15777 corp: 35/1361b lim: 50 exec/s: 20 rss: 74Mb 00:07:42.314 ###### Recommended dictionary. ###### 00:07:42.314 "\001\002!P\375\236\006<" # Uses: 4 00:07:42.314 ###### End of recommended dictionary. 
###### 00:07:42.314 Done 41 runs in 2 second(s) 00:07:42.618 06:45:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_21.conf /var/tmp/suppress_nvmf_fuzz 00:07:42.618 06:45:49 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:42.618 06:45:49 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:42.618 06:45:49 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 22 1 0x1 00:07:42.618 06:45:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=22 00:07:42.618 06:45:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:42.618 06:45:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:42.618 06:45:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:07:42.618 06:45:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_22.conf 00:07:42.618 06:45:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:42.618 06:45:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:42.618 06:45:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 22 00:07:42.618 06:45:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4422 00:07:42.618 06:45:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:07:42.618 06:45:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' 00:07:42.618 06:45:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4422"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:42.618 06:45:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:42.618 06:45:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:42.618 06:45:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' -c /tmp/fuzz_json_22.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 -Z 22 00:07:42.618 [2024-12-12 06:45:49.938960] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 
00:07:42.618 [2024-12-12 06:45:49.939041] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1162966 ] 00:07:42.915 [2024-12-12 06:45:50.132875] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.915 [2024-12-12 06:45:50.171706] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.915 [2024-12-12 06:45:50.230983] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:42.915 [2024-12-12 06:45:50.247330] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4422 *** 00:07:42.915 INFO: Running with entropic power schedule (0xFF, 100). 00:07:42.915 INFO: Seed: 2989637687 00:07:42.915 INFO: Loaded 1 modules (390948 inline 8-bit counters): 390948 [0x2c8afcc, 0x2cea6f0), 00:07:42.915 INFO: Loaded 1 PC tables (390948 PCs): 390948 [0x2cea6f0,0x32e1930), 00:07:42.915 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:07:42.915 INFO: A corpus is not provided, starting from an empty corpus 00:07:42.915 #2 INITED exec/s: 0 rss: 67Mb 00:07:42.915 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:42.915 This may also happen if the target rejected all inputs we tried so far 00:07:42.915 [2024-12-12 06:45:50.323909] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:42.915 [2024-12-12 06:45:50.323946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:42.915 [2024-12-12 06:45:50.324066] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:42.915 [2024-12-12 06:45:50.324093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:42.915 [2024-12-12 06:45:50.324218] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:42.915 [2024-12-12 06:45:50.324243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:43.175 NEW_FUNC[1/718]: 0x463418 in fuzz_nvm_reservation_register_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:644 00:07:43.175 NEW_FUNC[2/718]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:43.175 #6 NEW cov: 12225 ft: 12225 corp: 2/59b lim: 85 exec/s: 0 rss: 73Mb L: 58/58 MS: 4 CrossOver-CrossOver-ChangeBit-InsertRepeatedBytes- 00:07:43.175 [2024-12-12 06:45:50.664314] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:43.175 [2024-12-12 06:45:50.664366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:43.434 #17 NEW cov: 12355 ft: 13794 corp: 3/89b lim: 85 exec/s: 0 rss: 73Mb L: 30/58 MS: 1 EraseBytes- 00:07:43.434 [2024-12-12 06:45:50.734755] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:43.434 [2024-12-12 06:45:50.734790] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:43.434 [2024-12-12 06:45:50.734928] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:43.434 [2024-12-12 06:45:50.734950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:43.434 #18 NEW cov: 12361 ft: 14147 corp: 4/127b lim: 85 exec/s: 0 rss: 73Mb L: 38/58 MS: 1 EraseBytes- 00:07:43.434 [2024-12-12 06:45:50.784822] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:43.434 [2024-12-12 06:45:50.784851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:43.434 [2024-12-12 06:45:50.784984] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:43.434 [2024-12-12 06:45:50.785005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:43.434 #19 NEW cov: 12446 ft: 14467 corp: 5/165b lim: 85 exec/s: 0 rss: 73Mb L: 38/58 MS: 1 ChangeBit- 00:07:43.434 [2024-12-12 06:45:50.855016] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:43.434 [2024-12-12 06:45:50.855047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:43.434 [2024-12-12 06:45:50.855170] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:43.434 [2024-12-12 06:45:50.855200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:43.434 #20 NEW cov: 12446 ft: 14610 corp: 6/204b lim: 85 exec/s: 0 rss: 73Mb L: 39/58 MS: 1 InsertByte- 00:07:43.434 [2024-12-12 06:45:50.905908] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:43.434 [2024-12-12 06:45:50.905940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:43.434 [2024-12-12 06:45:50.906023] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:43.434 [2024-12-12 06:45:50.906050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:43.434 [2024-12-12 06:45:50.906174] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:43.434 [2024-12-12 06:45:50.906201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:43.434 [2024-12-12 06:45:50.906329] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:43.434 [2024-12-12 06:45:50.906351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:43.434 #21 NEW cov: 12446 ft: 15085 corp: 7/273b lim: 85 exec/s: 0 rss: 74Mb L: 69/69 MS: 1 InsertRepeatedBytes- 00:07:43.693 [2024-12-12 06:45:50.975098] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:43.693 [2024-12-12 06:45:50.975125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:43.693 #25 NEW cov: 12446 ft: 15209 corp: 8/306b lim: 85 exec/s: 0 rss: 74Mb L: 33/69 MS: 4 CopyPart-InsertByte-CopyPart-CrossOver- 00:07:43.693 [2024-12-12 06:45:51.025221] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:43.693 [2024-12-12 06:45:51.025254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:43.693 #26 NEW cov: 12446 ft: 15280 corp: 9/336b lim: 85 exec/s: 0 rss: 74Mb L: 30/69 MS: 1 CrossOver- 00:07:43.693 [2024-12-12 06:45:51.095797] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:43.693 [2024-12-12 06:45:51.095823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:43.693 [2024-12-12 06:45:51.095946] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:43.693 [2024-12-12 06:45:51.095969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:43.693 #27 NEW cov: 12446 ft: 15325 corp: 10/374b lim: 85 exec/s: 0 rss: 74Mb L: 38/69 MS: 1 ChangeBit- 00:07:43.693 [2024-12-12 06:45:51.145984] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:43.693 [2024-12-12 06:45:51.146010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:43.693 [2024-12-12 06:45:51.146134] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:43.693 [2024-12-12 06:45:51.146160] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:43.693 NEW_FUNC[1/1]: 0x1c586b8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:652 00:07:43.693 #28 NEW cov: 12469 ft: 15411 corp: 11/412b lim: 85 exec/s: 0 rss: 74Mb L: 38/69 MS: 1 ShuffleBytes- 00:07:43.952 [2024-12-12 06:45:51.216486] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:43.952 [2024-12-12 06:45:51.216521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:43.952 [2024-12-12 06:45:51.216624] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:43.952 [2024-12-12 06:45:51.216651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:43.952 [2024-12-12 06:45:51.216777] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:43.952 [2024-12-12 06:45:51.216808] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:43.952 #29 NEW cov: 12469 ft: 15423 corp: 12/478b lim: 85 exec/s: 0 rss: 74Mb L: 66/69 MS: 1 CopyPart- 
00:07:43.952 [2024-12-12 06:45:51.286436] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:43.952 [2024-12-12 06:45:51.286472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:43.952 [2024-12-12 06:45:51.286615] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:43.952 [2024-12-12 06:45:51.286637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:43.952 #30 NEW cov: 12469 ft: 15441 corp: 13/517b lim: 85 exec/s: 30 rss: 74Mb L: 39/69 MS: 1 ChangeBinInt- 00:07:43.952 [2024-12-12 06:45:51.356597] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:43.952 [2024-12-12 06:45:51.356624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:43.952 [2024-12-12 06:45:51.356755] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:43.952 [2024-12-12 06:45:51.356781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:43.952 #31 NEW cov: 12469 ft: 15452 corp: 14/558b lim: 85 exec/s: 31 rss: 74Mb L: 41/69 MS: 1 CrossOver- 00:07:43.952 [2024-12-12 06:45:51.406729] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:43.952 [2024-12-12 06:45:51.406763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:43.952 [2024-12-12 06:45:51.406902] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:43.952 [2024-12-12 06:45:51.406927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:43.952 #32 NEW cov: 12469 ft: 15477 corp: 15/598b lim: 85 exec/s: 32 rss: 74Mb L: 40/69 MS: 1 InsertByte- 00:07:43.952 [2024-12-12 06:45:51.456691] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:43.952 [2024-12-12 06:45:51.456724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:44.211 #33 NEW cov: 12469 ft: 15531 corp: 16/628b lim: 85 exec/s: 33 rss: 74Mb L: 30/69 MS: 1 ChangeBit- 00:07:44.211 [2024-12-12 06:45:51.527180] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:44.211 [2024-12-12 06:45:51.527214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:44.211 [2024-12-12 06:45:51.527340] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:44.211 [2024-12-12 06:45:51.527367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:44.211 #34 NEW cov: 12469 ft: 15589 corp: 17/669b lim: 85 exec/s: 34 rss: 74Mb L: 41/69 MS: 1 ChangeByte- 00:07:44.211 [2024-12-12 06:45:51.597619] nvme_qpair.c: 256:nvme_io_qpair_print_command: 
*NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:44.211 [2024-12-12 06:45:51.597653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:44.211 [2024-12-12 06:45:51.597776] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:44.211 [2024-12-12 06:45:51.597808] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:44.211 [2024-12-12 06:45:51.597943] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:44.211 [2024-12-12 06:45:51.597967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:44.211 #37 NEW cov: 12469 ft: 15596 corp: 18/735b lim: 85 exec/s: 37 rss: 74Mb L: 66/69 MS: 3 ShuffleBytes-ChangeBinInt-InsertRepeatedBytes- 00:07:44.211 [2024-12-12 06:45:51.647680] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:44.211 [2024-12-12 06:45:51.647714] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:44.211 [2024-12-12 06:45:51.647831] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:44.211 [2024-12-12 06:45:51.647856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:44.211 [2024-12-12 06:45:51.647988] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:44.211 [2024-12-12 06:45:51.648014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:44.211 #38 NEW cov: 12469 ft: 15624 corp: 19/801b lim: 85 exec/s: 38 rss: 74Mb L: 66/69 MS: 1 ShuffleBytes- 00:07:44.211 [2024-12-12 06:45:51.717683] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:44.211 [2024-12-12 06:45:51.717710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:44.211 [2024-12-12 06:45:51.717840] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:44.211 [2024-12-12 06:45:51.717864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:44.470 #39 NEW cov: 12469 ft: 15651 corp: 20/839b lim: 85 exec/s: 39 rss: 75Mb L: 38/69 MS: 1 ChangeBit- 00:07:44.470 [2024-12-12 06:45:51.787675] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:44.470 [2024-12-12 06:45:51.787707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:44.470 #40 NEW cov: 12469 ft: 15674 corp: 21/869b lim: 85 exec/s: 40 rss: 75Mb L: 30/69 MS: 1 ChangeBinInt- 00:07:44.470 [2024-12-12 06:45:51.838055] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:44.470 [2024-12-12 06:45:51.838089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:44.470 [2024-12-12 06:45:51.838229] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:44.470 [2024-12-12 06:45:51.838251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:44.470 #41 NEW cov: 12469 ft: 15683 corp: 22/903b lim: 85 exec/s: 41 rss: 75Mb L: 34/69 MS: 1 InsertByte- 00:07:44.470 [2024-12-12 06:45:51.887870] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:44.470 [2024-12-12 06:45:51.887897] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:44.470 #42 NEW cov: 12469 ft: 15724 corp: 23/933b lim: 85 exec/s: 42 rss: 75Mb L: 30/69 MS: 1 ChangeBinInt- 00:07:44.470 [2024-12-12 06:45:51.938028] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:44.470 [2024-12-12 06:45:51.938055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:44.470 #43 NEW cov: 12469 ft: 15747 corp: 24/963b lim: 85 exec/s: 43 rss: 75Mb L: 30/69 MS: 1 ShuffleBytes- 00:07:44.470 [2024-12-12 06:45:51.988166] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:44.470 [2024-12-12 06:45:51.988201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:44.729 #44 NEW cov: 12469 ft: 15760 corp: 25/993b lim: 85 exec/s: 44 rss: 75Mb L: 30/69 MS: 1 ShuffleBytes- 00:07:44.729 [2024-12-12 06:45:52.058647] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:44.729 [2024-12-12 06:45:52.058676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:44.729 [2024-12-12 06:45:52.058806] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:44.729 [2024-12-12 06:45:52.058835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:44.729 #45 NEW cov: 12469 ft: 15766 corp: 26/1034b lim: 85 exec/s: 45 rss: 75Mb L: 41/69 MS: 1 CrossOver- 00:07:44.729 [2024-12-12 06:45:52.129000] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:44.729 [2024-12-12 06:45:52.129027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:44.729 [2024-12-12 06:45:52.129160] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:44.729 [2024-12-12 06:45:52.129194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:44.729 #46 NEW cov: 12469 ft: 15771 corp: 27/1073b lim: 85 exec/s: 46 rss: 75Mb L: 39/69 MS: 1 ShuffleBytes- 00:07:44.729 [2024-12-12 06:45:52.199191] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:44.729 [2024-12-12 06:45:52.199222] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:44.729 [2024-12-12 06:45:52.199358] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:44.729 [2024-12-12 06:45:52.199384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:44.729 #47 NEW cov: 12469 ft: 15843 corp: 28/1114b lim: 85 exec/s: 47 rss: 75Mb L: 41/69 MS: 1 ChangeBinInt- 00:07:44.729 [2024-12-12 06:45:52.249100] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:44.729 [2024-12-12 06:45:52.249130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:44.988 #48 NEW cov: 12469 ft: 15887 corp: 29/1138b lim: 85 exec/s: 48 rss: 75Mb L: 24/69 MS: 1 EraseBytes- 00:07:44.989 [2024-12-12 06:45:52.299477] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:44.989 [2024-12-12 06:45:52.299504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:44.989 [2024-12-12 06:45:52.299633] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:44.989 [2024-12-12 06:45:52.299658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:44.989 #49 NEW cov: 12469 ft: 15890 corp: 30/1176b lim: 85 exec/s: 24 rss: 75Mb L: 38/69 MS: 1 ShuffleBytes- 00:07:44.989 #49 DONE cov: 12469 ft: 15890 corp: 30/1176b lim: 85 exec/s: 24 rss: 75Mb 00:07:44.989 Done 49 runs in 2 second(s) 00:07:44.989 06:45:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_22.conf /var/tmp/suppress_nvmf_fuzz 00:07:44.989 06:45:52 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:44.989 06:45:52 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:44.989 06:45:52 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 23 1 0x1 00:07:44.989 06:45:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=23 00:07:44.989 06:45:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:44.989 06:45:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:44.989 06:45:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:07:44.989 06:45:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_23.conf 00:07:44.989 06:45:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:44.989 06:45:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:44.989 06:45:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 23 00:07:44.989 06:45:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4423 00:07:44.989 06:45:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:07:44.989 06:45:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 
traddr:127.0.0.1 trsvcid:4423' 00:07:44.989 06:45:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4423"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:44.989 06:45:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:44.989 06:45:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:44.989 06:45:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' -c /tmp/fuzz_json_23.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 -Z 23 00:07:44.989 [2024-12-12 06:45:52.493063] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 00:07:44.989 [2024-12-12 06:45:52.493132] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1163372 ] 00:07:45.248 [2024-12-12 06:45:52.677001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.248 [2024-12-12 06:45:52.713777] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.507 [2024-12-12 06:45:52.773158] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:45.507 [2024-12-12 06:45:52.789444] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4423 *** 00:07:45.507 INFO: Running with entropic power schedule (0xFF, 100). 00:07:45.507 INFO: Seed: 1238664073 00:07:45.507 INFO: Loaded 1 modules (390948 inline 8-bit counters): 390948 [0x2c8afcc, 0x2cea6f0), 00:07:45.507 INFO: Loaded 1 PC tables (390948 PCs): 390948 [0x2cea6f0,0x32e1930), 00:07:45.507 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:07:45.507 INFO: A corpus is not provided, starting from an empty corpus 00:07:45.507 #2 INITED exec/s: 0 rss: 66Mb 00:07:45.507 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:45.507 This may also happen if the target rejected all inputs we tried so far 00:07:45.507 [2024-12-12 06:45:52.865864] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:45.507 [2024-12-12 06:45:52.865897] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:45.507 [2024-12-12 06:45:52.866029] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:45.507 [2024-12-12 06:45:52.866055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:45.507 [2024-12-12 06:45:52.866184] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:45.507 [2024-12-12 06:45:52.866209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:45.766 NEW_FUNC[1/717]: 0x466658 in fuzz_nvm_reservation_report_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:671 00:07:45.767 NEW_FUNC[2/717]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:45.767 #11 NEW cov: 12158 ft: 12159 corp: 2/16b lim: 25 exec/s: 0 rss: 72Mb L: 15/15 MS: 4 ChangeBit-ShuffleBytes-ChangeBit-InsertRepeatedBytes- 00:07:45.767 [2024-12-12 06:45:53.206814] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:45.767 [2024-12-12 06:45:53.206857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:45.767 [2024-12-12 06:45:53.206994] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:45.767 [2024-12-12 06:45:53.207018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:45.767 [2024-12-12 06:45:53.207155] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:45.767 [2024-12-12 06:45:53.207170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:45.767 #12 NEW cov: 12288 ft: 12703 corp: 3/31b lim: 25 exec/s: 0 rss: 72Mb L: 15/15 MS: 1 CrossOver- 00:07:45.767 [2024-12-12 06:45:53.276959] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:45.767 [2024-12-12 06:45:53.276993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:45.767 [2024-12-12 06:45:53.277117] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:45.767 [2024-12-12 06:45:53.277152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:45.767 [2024-12-12 06:45:53.277297] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:45.767 [2024-12-12 06:45:53.277318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 
dnr:1 00:07:46.026 #13 NEW cov: 12294 ft: 12892 corp: 4/46b lim: 25 exec/s: 0 rss: 72Mb L: 15/15 MS: 1 ChangeBinInt- 00:07:46.026 [2024-12-12 06:45:53.347080] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:46.026 [2024-12-12 06:45:53.347115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:46.026 [2024-12-12 06:45:53.347250] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:46.026 [2024-12-12 06:45:53.347276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:46.026 [2024-12-12 06:45:53.347413] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:46.026 [2024-12-12 06:45:53.347439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:46.026 #14 NEW cov: 12379 ft: 13285 corp: 5/61b lim: 25 exec/s: 0 rss: 72Mb L: 15/15 MS: 1 ShuffleBytes- 00:07:46.026 [2024-12-12 06:45:53.397259] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:46.026 [2024-12-12 06:45:53.397293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:46.026 [2024-12-12 06:45:53.397423] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:46.026 [2024-12-12 06:45:53.397453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:46.026 [2024-12-12 06:45:53.397602] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:46.026 [2024-12-12 06:45:53.397632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:46.026 #15 NEW cov: 12379 ft: 13352 corp: 6/76b lim: 25 exec/s: 0 rss: 72Mb L: 15/15 MS: 1 ShuffleBytes- 00:07:46.026 [2024-12-12 06:45:53.447159] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:46.026 [2024-12-12 06:45:53.447187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:46.026 #16 NEW cov: 12379 ft: 13838 corp: 7/85b lim: 25 exec/s: 0 rss: 72Mb L: 9/15 MS: 1 EraseBytes- 00:07:46.026 [2024-12-12 06:45:53.497787] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:46.026 [2024-12-12 06:45:53.497817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:46.026 [2024-12-12 06:45:53.497904] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:46.026 [2024-12-12 06:45:53.497932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:46.026 [2024-12-12 06:45:53.498079] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:46.026 [2024-12-12 06:45:53.498108] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:46.026 [2024-12-12 06:45:53.498250] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:46.026 [2024-12-12 06:45:53.498274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:46.026 #17 NEW cov: 12379 ft: 14291 corp: 8/108b lim: 25 exec/s: 0 rss: 72Mb L: 23/23 MS: 1 InsertRepeatedBytes- 00:07:46.026 [2024-12-12 06:45:53.547663] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:46.026 [2024-12-12 06:45:53.547694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:46.026 [2024-12-12 06:45:53.547826] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:46.026 [2024-12-12 06:45:53.547852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:46.286 #23 NEW cov: 12379 ft: 14575 corp: 9/122b lim: 25 exec/s: 0 rss: 72Mb L: 14/23 MS: 1 EraseBytes- 00:07:46.286 [2024-12-12 06:45:53.618011] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:46.286 [2024-12-12 06:45:53.618046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:46.286 [2024-12-12 06:45:53.618160] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:46.286 [2024-12-12 06:45:53.618186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:46.286 [2024-12-12 06:45:53.618318] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:46.286 [2024-12-12 06:45:53.618343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:46.286 #24 NEW cov: 12379 ft: 14663 corp: 10/137b lim: 25 exec/s: 0 rss: 72Mb L: 15/23 MS: 1 ChangeBinInt- 00:07:46.286 [2024-12-12 06:45:53.667732] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:46.286 [2024-12-12 06:45:53.667767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:46.286 #25 NEW cov: 12379 ft: 14705 corp: 11/143b lim: 25 exec/s: 0 rss: 72Mb L: 6/23 MS: 1 CrossOver- 00:07:46.286 [2024-12-12 06:45:53.718356] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:46.286 [2024-12-12 06:45:53.718387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:46.286 [2024-12-12 06:45:53.718530] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:46.286 [2024-12-12 06:45:53.718555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:46.286 [2024-12-12 06:45:53.718695] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: 
RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:46.286 [2024-12-12 06:45:53.718720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:46.286 NEW_FUNC[1/1]: 0x1c586b8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:652 00:07:46.286 #26 NEW cov: 12402 ft: 14748 corp: 12/158b lim: 25 exec/s: 0 rss: 73Mb L: 15/23 MS: 1 ChangeByte- 00:07:46.286 [2024-12-12 06:45:53.788114] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:46.286 [2024-12-12 06:45:53.788156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:46.545 #27 NEW cov: 12402 ft: 14776 corp: 13/167b lim: 25 exec/s: 0 rss: 73Mb L: 9/23 MS: 1 CopyPart- 00:07:46.545 [2024-12-12 06:45:53.858318] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:46.545 [2024-12-12 06:45:53.858351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:46.545 #28 NEW cov: 12402 ft: 14820 corp: 14/173b lim: 25 exec/s: 28 rss: 73Mb L: 6/23 MS: 1 ChangeBinInt- 00:07:46.545 [2024-12-12 06:45:53.928745] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:46.545 [2024-12-12 06:45:53.928778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:46.545 [2024-12-12 06:45:53.928920] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:46.545 [2024-12-12 06:45:53.928946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:46.545 #29 NEW cov: 12402 ft: 14936 corp: 15/187b lim: 25 exec/s: 29 rss: 73Mb L: 14/23 MS: 1 EraseBytes- 00:07:46.545 [2024-12-12 06:45:53.979050] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:46.545 [2024-12-12 06:45:53.979084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:46.545 [2024-12-12 06:45:53.979212] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:46.545 [2024-12-12 06:45:53.979237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:46.545 [2024-12-12 06:45:53.979366] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:46.545 [2024-12-12 06:45:53.979390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:46.545 #30 NEW cov: 12402 ft: 14949 corp: 16/202b lim: 25 exec/s: 30 rss: 73Mb L: 15/23 MS: 1 ChangeBinInt- 00:07:46.545 [2024-12-12 06:45:54.049280] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:46.545 [2024-12-12 06:45:54.049311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:46.545 [2024-12-12 06:45:54.049428] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:46.545 [2024-12-12 06:45:54.049466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:46.545 [2024-12-12 06:45:54.049612] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:46.545 [2024-12-12 06:45:54.049639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:46.804 #31 NEW cov: 12402 ft: 14970 corp: 17/217b lim: 25 exec/s: 31 rss: 73Mb L: 15/23 MS: 1 CopyPart- 00:07:46.804 [2024-12-12 06:45:54.119691] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:46.804 [2024-12-12 06:45:54.119722] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:46.805 [2024-12-12 06:45:54.119802] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:46.805 [2024-12-12 06:45:54.119824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:46.805 [2024-12-12 06:45:54.119964] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:46.805 [2024-12-12 06:45:54.119992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:46.805 [2024-12-12 06:45:54.120138] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:46.805 [2024-12-12 06:45:54.120166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:46.805 #32 NEW cov: 12402 ft: 14999 corp: 18/238b lim: 25 exec/s: 32 rss: 73Mb L: 21/23 MS: 1 InsertRepeatedBytes- 00:07:46.805 [2024-12-12 06:45:54.189978] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:46.805 [2024-12-12 06:45:54.190009] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:46.805 [2024-12-12 06:45:54.190106] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:46.805 [2024-12-12 06:45:54.190134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:46.805 [2024-12-12 06:45:54.190263] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:46.805 [2024-12-12 06:45:54.190288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:46.805 [2024-12-12 06:45:54.190430] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:46.805 [2024-12-12 06:45:54.190457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:46.805 #33 NEW cov: 12402 ft: 15087 corp: 19/261b lim: 25 exec/s: 33 rss: 73Mb L: 23/23 MS: 1 ChangeBit- 00:07:46.805 [2024-12-12 06:45:54.240176] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:46.805 [2024-12-12 06:45:54.240211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:46.805 [2024-12-12 06:45:54.240321] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:46.805 [2024-12-12 06:45:54.240347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:46.805 [2024-12-12 06:45:54.240482] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:46.805 [2024-12-12 06:45:54.240510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:46.805 [2024-12-12 06:45:54.240651] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:46.805 [2024-12-12 06:45:54.240677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:46.805 #34 NEW cov: 12402 ft: 15164 corp: 20/284b lim: 25 exec/s: 34 rss: 73Mb L: 23/23 MS: 1 ShuffleBytes- 00:07:46.805 [2024-12-12 06:45:54.289713] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:46.805 [2024-12-12 06:45:54.289740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:47.064 #40 NEW cov: 12402 ft: 15169 corp: 21/293b lim: 25 exec/s: 40 rss: 73Mb L: 9/23 MS: 1 ShuffleBytes- 00:07:47.064 [2024-12-12 06:45:54.359935] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:47.064 [2024-12-12 06:45:54.359974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:47.064 #45 NEW cov: 12402 ft: 15208 corp: 22/298b lim: 25 exec/s: 45 rss: 73Mb L: 5/23 MS: 5 CrossOver-ChangeBit-InsertByte-ChangeByte-CopyPart- 00:07:47.064 [2024-12-12 06:45:54.410541] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:47.064 [2024-12-12 06:45:54.410570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:47.064 [2024-12-12 06:45:54.410675] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:47.064 [2024-12-12 06:45:54.410700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:47.064 [2024-12-12 06:45:54.410841] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:47.064 [2024-12-12 06:45:54.410862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:47.064 #46 NEW cov: 12402 ft: 15228 corp: 23/313b lim: 25 exec/s: 46 rss: 73Mb L: 15/23 MS: 1 ChangeBit- 00:07:47.064 [2024-12-12 06:45:54.480660] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:47.064 [2024-12-12 06:45:54.480697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:47.064 [2024-12-12 06:45:54.480813] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:47.064 [2024-12-12 06:45:54.480842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:47.064 [2024-12-12 06:45:54.480987] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:47.064 [2024-12-12 06:45:54.481014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:47.064 #47 NEW cov: 12402 ft: 15248 corp: 24/328b lim: 25 exec/s: 47 rss: 73Mb L: 15/23 MS: 1 ShuffleBytes- 00:07:47.064 [2024-12-12 06:45:54.530650] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:47.064 [2024-12-12 06:45:54.530684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:47.064 [2024-12-12 06:45:54.530794] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:47.064 [2024-12-12 06:45:54.530838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:47.064 #49 NEW cov: 12402 ft: 15256 corp: 25/340b lim: 25 exec/s: 49 rss: 74Mb L: 12/23 MS: 2 EraseBytes-CMP- DE: "\377\377\377\377\377\377\377\377"- 00:07:47.323 [2024-12-12 06:45:54.600670] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:47.323 [2024-12-12 06:45:54.600698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:47.323 #50 NEW cov: 12402 ft: 15269 corp: 26/349b lim: 25 exec/s: 50 rss: 74Mb L: 9/23 MS: 1 ShuffleBytes- 00:07:47.323 [2024-12-12 06:45:54.651219] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:47.323 [2024-12-12 06:45:54.651254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:47.323 [2024-12-12 06:45:54.651338] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:47.323 [2024-12-12 06:45:54.651364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:47.323 [2024-12-12 06:45:54.651501] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:47.323 [2024-12-12 06:45:54.651532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:47.323 #51 NEW cov: 12402 ft: 15278 corp: 27/364b lim: 25 exec/s: 51 rss: 74Mb L: 15/23 MS: 1 ChangeBinInt- 00:07:47.323 [2024-12-12 06:45:54.721415] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:47.323 [2024-12-12 06:45:54.721447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:47.323 [2024-12-12 06:45:54.721557] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: 
RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:47.323 [2024-12-12 06:45:54.721581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:47.323 [2024-12-12 06:45:54.721725] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:47.323 [2024-12-12 06:45:54.721755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:47.323 #52 NEW cov: 12402 ft: 15284 corp: 28/379b lim: 25 exec/s: 52 rss: 74Mb L: 15/23 MS: 1 ChangeBinInt- 00:07:47.323 [2024-12-12 06:45:54.771838] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:47.323 [2024-12-12 06:45:54.771873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:47.323 [2024-12-12 06:45:54.771985] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:47.323 [2024-12-12 06:45:54.772012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:47.323 [2024-12-12 06:45:54.772154] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:47.323 [2024-12-12 06:45:54.772175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:47.323 [2024-12-12 06:45:54.772309] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:47.323 [2024-12-12 06:45:54.772333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:47.323 #53 NEW cov: 12402 ft: 15322 corp: 29/402b lim: 25 exec/s: 53 rss: 74Mb L: 23/23 MS: 1 CrossOver- 00:07:47.323 [2024-12-12 06:45:54.841798] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:47.323 [2024-12-12 06:45:54.841829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:47.323 [2024-12-12 06:45:54.841915] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:47.323 [2024-12-12 06:45:54.841944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:47.323 [2024-12-12 06:45:54.842080] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:47.323 [2024-12-12 06:45:54.842109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:47.583 #54 NEW cov: 12402 ft: 15334 corp: 30/417b lim: 25 exec/s: 27 rss: 74Mb L: 15/23 MS: 1 ChangeBit- 00:07:47.583 #54 DONE cov: 12402 ft: 15334 corp: 30/417b lim: 25 exec/s: 27 rss: 74Mb 00:07:47.583 ###### Recommended dictionary. ###### 00:07:47.583 "\377\377\377\377\377\377\377\377" # Uses: 0 00:07:47.583 ###### End of recommended dictionary. 
###### 00:07:47.583 Done 54 runs in 2 second(s) 00:07:47.583 06:45:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_23.conf /var/tmp/suppress_nvmf_fuzz 00:07:47.583 06:45:54 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:47.583 06:45:54 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:47.583 06:45:54 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 24 1 0x1 00:07:47.583 06:45:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=24 00:07:47.583 06:45:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:47.583 06:45:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:47.583 06:45:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:07:47.583 06:45:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_24.conf 00:07:47.583 06:45:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:47.583 06:45:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:47.583 06:45:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 24 00:07:47.583 06:45:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4424 00:07:47.583 06:45:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:07:47.583 06:45:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' 00:07:47.583 06:45:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4424"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:47.583 06:45:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:47.583 06:45:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:47.583 06:45:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' -c /tmp/fuzz_json_24.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 -Z 24 00:07:47.583 [2024-12-12 06:45:55.013834] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 
00:07:47.583 [2024-12-12 06:45:55.013911] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1163912 ] 00:07:47.842 [2024-12-12 06:45:55.206162] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.842 [2024-12-12 06:45:55.238935] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.842 [2024-12-12 06:45:55.297801] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:47.842 [2024-12-12 06:45:55.314053] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4424 *** 00:07:47.842 INFO: Running with entropic power schedule (0xFF, 100). 00:07:47.842 INFO: Seed: 3761682376 00:07:47.842 INFO: Loaded 1 modules (390948 inline 8-bit counters): 390948 [0x2c8afcc, 0x2cea6f0), 00:07:47.842 INFO: Loaded 1 PC tables (390948 PCs): 390948 [0x2cea6f0,0x32e1930), 00:07:47.842 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:07:47.842 INFO: A corpus is not provided, starting from an empty corpus 00:07:47.842 #2 INITED exec/s: 0 rss: 65Mb 00:07:47.842 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:47.842 This may also happen if the target rejected all inputs we tried so far 00:07:47.842 [2024-12-12 06:45:55.359428] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:12514849898252578221 len:44462 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.842 [2024-12-12 06:45:55.359458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:47.842 [2024-12-12 06:45:55.359510] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:12514849900987264429 len:44462 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.842 [2024-12-12 06:45:55.359527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:48.360 NEW_FUNC[1/718]: 0x467748 in fuzz_nvm_compare_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:685 00:07:48.360 NEW_FUNC[2/718]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:48.360 #14 NEW cov: 12248 ft: 12247 corp: 2/44b lim: 100 exec/s: 0 rss: 72Mb L: 43/43 MS: 2 CMP-InsertRepeatedBytes- DE: "\001\000\000\000\000\000\000\000"- 00:07:48.360 [2024-12-12 06:45:55.670302] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:12514849898252578221 len:44462 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.360 [2024-12-12 06:45:55.670338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:48.360 [2024-12-12 06:45:55.670408] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:12514849900987264429 len:44462 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.360 [2024-12-12 06:45:55.670426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:48.360 #15 NEW cov: 12361 ft: 12732 corp: 3/89b lim: 100 exec/s: 0 rss: 72Mb L: 45/45 MS: 1 
CMP- DE: "\000\002"- 00:07:48.360 [2024-12-12 06:45:55.730365] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:12514849898252578221 len:44462 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.360 [2024-12-12 06:45:55.730393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:48.360 [2024-12-12 06:45:55.730444] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:12514849900987264429 len:44462 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.360 [2024-12-12 06:45:55.730464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:48.360 #16 NEW cov: 12367 ft: 13012 corp: 4/134b lim: 100 exec/s: 0 rss: 72Mb L: 45/45 MS: 1 ChangeBinInt- 00:07:48.360 [2024-12-12 06:45:55.790366] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:12514849898252578221 len:44462 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.360 [2024-12-12 06:45:55.790396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:48.360 #17 NEW cov: 12452 ft: 14049 corp: 5/157b lim: 100 exec/s: 0 rss: 72Mb L: 23/45 MS: 1 EraseBytes- 00:07:48.360 [2024-12-12 06:45:55.830468] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:753908673523117 len:44462 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.360 [2024-12-12 06:45:55.830497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:48.360 #18 NEW cov: 12452 ft: 14116 corp: 6/182b lim: 100 exec/s: 0 rss: 72Mb L: 25/45 MS: 1 PersAutoDict- DE: "\000\002"- 00:07:48.620 [2024-12-12 06:45:55.890829] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:12514849898252578221 len:44462 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.620 [2024-12-12 06:45:55.890859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:48.620 [2024-12-12 06:45:55.890925] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:12514849900987264429 len:44462 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.620 [2024-12-12 06:45:55.890946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:48.620 #19 NEW cov: 12452 ft: 14318 corp: 7/225b lim: 100 exec/s: 0 rss: 72Mb L: 43/45 MS: 1 ShuffleBytes- 00:07:48.620 [2024-12-12 06:45:55.930795] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:12514849898252578221 len:44462 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.620 [2024-12-12 06:45:55.930823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:48.620 #20 NEW cov: 12452 ft: 14415 corp: 8/256b lim: 100 exec/s: 0 rss: 72Mb L: 31/45 MS: 1 EraseBytes- 00:07:48.620 [2024-12-12 06:45:55.990956] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:12514849898252578221 len:44462 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.620 [2024-12-12 06:45:55.990983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT 
(00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:48.620 #21 NEW cov: 12452 ft: 14492 corp: 9/288b lim: 100 exec/s: 0 rss: 72Mb L: 32/45 MS: 1 InsertByte- 00:07:48.620 [2024-12-12 06:45:56.051235] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:12514849898252578221 len:44462 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.620 [2024-12-12 06:45:56.051265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:48.620 [2024-12-12 06:45:56.051335] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:48886129511890944 len:44462 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.620 [2024-12-12 06:45:56.051352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:48.620 #22 NEW cov: 12452 ft: 14584 corp: 10/331b lim: 100 exec/s: 0 rss: 73Mb L: 43/45 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\000"- 00:07:48.620 [2024-12-12 06:45:56.111429] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:12514849898252578221 len:44462 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.620 [2024-12-12 06:45:56.111456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:48.620 [2024-12-12 06:45:56.111508] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:12514849900987264429 len:44462 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.620 [2024-12-12 06:45:56.111523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:48.620 #23 NEW cov: 12452 ft: 14708 corp: 11/378b lim: 100 exec/s: 0 rss: 73Mb L: 47/47 MS: 1 CMP- DE: "\377\377\377\020"- 00:07:48.879 [2024-12-12 06:45:56.151469] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:12514849898252578221 len:44462 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.879 [2024-12-12 06:45:56.151496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:48.879 [2024-12-12 06:45:56.151539] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:48886129511890944 len:44462 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.879 [2024-12-12 06:45:56.151555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:48.879 #24 NEW cov: 12452 ft: 14723 corp: 12/421b lim: 100 exec/s: 0 rss: 73Mb L: 43/47 MS: 1 ChangeBit- 00:07:48.879 [2024-12-12 06:45:56.211685] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:12514849898252578221 len:44462 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.879 [2024-12-12 06:45:56.211712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:48.879 [2024-12-12 06:45:56.211755] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:12514849898839780781 len:44462 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.879 [2024-12-12 06:45:56.211771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 
dnr:1 00:07:48.879 #25 NEW cov: 12452 ft: 14771 corp: 13/466b lim: 100 exec/s: 0 rss: 73Mb L: 45/47 MS: 1 ChangeBinInt- 00:07:48.879 [2024-12-12 06:45:56.251799] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:12514849898252578189 len:44462 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.879 [2024-12-12 06:45:56.251826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:48.879 [2024-12-12 06:45:56.251864] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:12514849898839780781 len:44462 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.879 [2024-12-12 06:45:56.251880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:48.879 NEW_FUNC[1/1]: 0x1c586b8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:652 00:07:48.879 #26 NEW cov: 12475 ft: 14812 corp: 14/511b lim: 100 exec/s: 0 rss: 73Mb L: 45/47 MS: 1 ChangeBit- 00:07:48.880 [2024-12-12 06:45:56.311975] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:12514849898252578221 len:44462 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.880 [2024-12-12 06:45:56.312002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:48.880 [2024-12-12 06:45:56.312056] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:65536 len:44462 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.880 [2024-12-12 06:45:56.312072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:48.880 #27 NEW cov: 12475 ft: 14854 corp: 15/558b lim: 100 exec/s: 0 rss: 73Mb L: 47/47 MS: 1 CMP- DE: "\016\000\000\000"- 00:07:48.880 [2024-12-12 06:45:56.352102] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:12514849898627771821 len:44462 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.880 [2024-12-12 06:45:56.352129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:48.880 [2024-12-12 06:45:56.352196] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:12514849900987264429 len:44462 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.880 [2024-12-12 06:45:56.352214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:48.880 #31 NEW cov: 12475 ft: 14868 corp: 16/599b lim: 100 exec/s: 31 rss: 73Mb L: 41/47 MS: 4 CrossOver-ChangeBinInt-ChangeByte-CrossOver- 00:07:48.880 [2024-12-12 06:45:56.392269] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:12514849898252578221 len:44462 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.880 [2024-12-12 06:45:56.392296] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:48.880 [2024-12-12 06:45:56.392351] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:12514849900987264429 len:44462 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.880 [2024-12-12 06:45:56.392368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:49.139 #32 NEW cov: 12475 ft: 14882 corp: 17/644b lim: 100 exec/s: 32 rss: 73Mb L: 45/47 MS: 1 ShuffleBytes- 00:07:49.139 [2024-12-12 06:45:56.432329] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:12514849898252578221 len:44462 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.139 [2024-12-12 06:45:56.432358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:49.139 [2024-12-12 06:45:56.432415] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:12514849900987264429 len:44462 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.139 [2024-12-12 06:45:56.432430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:49.139 #33 NEW cov: 12475 ft: 14921 corp: 18/687b lim: 100 exec/s: 33 rss: 73Mb L: 43/47 MS: 1 PersAutoDict- DE: "\000\002"- 00:07:49.139 [2024-12-12 06:45:56.472297] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:12514849898252578221 len:44462 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.139 [2024-12-12 06:45:56.472325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:49.139 #34 NEW cov: 12475 ft: 14925 corp: 19/707b lim: 100 exec/s: 34 rss: 73Mb L: 20/47 MS: 1 EraseBytes- 00:07:49.139 [2024-12-12 06:45:56.512390] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:12514849898252578221 len:44462 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.139 [2024-12-12 06:45:56.512417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:49.139 #35 NEW cov: 12475 ft: 14950 corp: 20/730b lim: 100 exec/s: 35 rss: 73Mb L: 23/47 MS: 1 CopyPart- 00:07:49.140 [2024-12-12 06:45:56.552683] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:12514849898252578221 len:44462 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.140 [2024-12-12 06:45:56.552709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:49.140 [2024-12-12 06:45:56.552766] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:12514849900987264429 len:44462 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.140 [2024-12-12 06:45:56.552783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:49.140 #36 NEW cov: 12475 ft: 14964 corp: 21/775b lim: 100 exec/s: 36 rss: 73Mb L: 45/47 MS: 1 CopyPart- 00:07:49.140 [2024-12-12 06:45:56.592629] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:12514849898252578221 len:44462 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.140 [2024-12-12 06:45:56.592656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:49.140 #37 NEW cov: 12475 ft: 14994 corp: 22/798b lim: 100 exec/s: 37 rss: 73Mb L: 23/47 MS: 1 EraseBytes- 00:07:49.140 [2024-12-12 06:45:56.632867] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 
lba:12514849898627771821 len:44462 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.140 [2024-12-12 06:45:56.632893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:49.140 [2024-12-12 06:45:56.632948] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:12514849900987264429 len:44462 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.140 [2024-12-12 06:45:56.632964] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:49.399 #38 NEW cov: 12475 ft: 15028 corp: 23/841b lim: 100 exec/s: 38 rss: 73Mb L: 43/47 MS: 1 CrossOver- 00:07:49.399 [2024-12-12 06:45:56.692945] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:12514849898252578221 len:44462 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.399 [2024-12-12 06:45:56.692972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:49.399 #39 NEW cov: 12475 ft: 15053 corp: 24/870b lim: 100 exec/s: 39 rss: 73Mb L: 29/47 MS: 1 EraseBytes- 00:07:49.399 [2024-12-12 06:45:56.753248] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:12514849898627771821 len:44462 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.399 [2024-12-12 06:45:56.753275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:49.399 [2024-12-12 06:45:56.753330] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:12514849900987264429 len:44462 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.399 [2024-12-12 06:45:56.753349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:49.399 #40 NEW cov: 12475 ft: 15059 corp: 25/914b lim: 100 exec/s: 40 rss: 73Mb L: 44/47 MS: 1 InsertByte- 00:07:49.399 [2024-12-12 06:45:56.813260] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:12514700364671200685 len:44462 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.399 [2024-12-12 06:45:56.813287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:49.399 #41 NEW cov: 12475 ft: 15085 corp: 26/943b lim: 100 exec/s: 41 rss: 73Mb L: 29/47 MS: 1 ChangeByte- 00:07:49.399 [2024-12-12 06:45:56.873587] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:12514849898252578221 len:44462 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.399 [2024-12-12 06:45:56.873614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:49.399 [2024-12-12 06:45:56.873655] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:12514849900987236781 len:44462 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.399 [2024-12-12 06:45:56.873672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:49.399 #42 NEW cov: 12475 ft: 15093 corp: 27/987b lim: 100 exec/s: 42 rss: 73Mb L: 44/47 MS: 1 InsertByte- 00:07:49.658 [2024-12-12 06:45:56.933712] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE 
sqid:1 cid:0 nsid:0 lba:12514849902368390922 len:44462 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.658 [2024-12-12 06:45:56.933739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:49.658 [2024-12-12 06:45:56.933794] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:12484450603502513581 len:44462 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.658 [2024-12-12 06:45:56.933814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:49.658 #43 NEW cov: 12475 ft: 15108 corp: 28/1034b lim: 100 exec/s: 43 rss: 74Mb L: 47/47 MS: 1 InsertRepeatedBytes- 00:07:49.658 [2024-12-12 06:45:56.993912] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:4195730023803140666 len:14907 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.658 [2024-12-12 06:45:56.993939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:49.658 [2024-12-12 06:45:56.994008] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:4195730024608447034 len:14907 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.659 [2024-12-12 06:45:56.994024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:49.659 #47 NEW cov: 12475 ft: 15117 corp: 29/1091b lim: 100 exec/s: 47 rss: 74Mb L: 57/57 MS: 4 CrossOver-ChangeByte-ChangeBit-InsertRepeatedBytes- 00:07:49.659 [2024-12-12 06:45:57.034038] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:12514849898252578221 len:44462 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.659 [2024-12-12 06:45:57.034067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:49.659 [2024-12-12 06:45:57.034120] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.659 [2024-12-12 06:45:57.034136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:49.659 #48 NEW cov: 12475 ft: 15122 corp: 30/1145b lim: 100 exec/s: 48 rss: 74Mb L: 54/57 MS: 1 InsertRepeatedBytes- 00:07:49.659 [2024-12-12 06:45:57.074124] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:12514849898252578221 len:44289 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.659 [2024-12-12 06:45:57.074155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:49.659 [2024-12-12 06:45:57.074212] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:4732629744891047341 len:44462 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.659 [2024-12-12 06:45:57.074227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:49.659 #49 NEW cov: 12475 ft: 15160 corp: 31/1191b lim: 100 exec/s: 49 rss: 74Mb L: 46/57 MS: 1 PersAutoDict- DE: "\000\002"- 00:07:49.659 [2024-12-12 06:45:57.114390] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:07:49.659 [2024-12-12 06:45:57.114419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:49.659 [2024-12-12 06:45:57.114459] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.659 [2024-12-12 06:45:57.114476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:49.659 [2024-12-12 06:45:57.114531] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.659 [2024-12-12 06:45:57.114547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:49.659 #50 NEW cov: 12475 ft: 15480 corp: 32/1255b lim: 100 exec/s: 50 rss: 74Mb L: 64/64 MS: 1 InsertRepeatedBytes- 00:07:49.659 [2024-12-12 06:45:57.154231] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:12514700364671200685 len:44462 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.659 [2024-12-12 06:45:57.154259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:49.919 #51 NEW cov: 12475 ft: 15505 corp: 33/1284b lim: 100 exec/s: 51 rss: 74Mb L: 29/64 MS: 1 CopyPart- 00:07:49.919 [2024-12-12 06:45:57.214594] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:12514849902368390922 len:44462 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.919 [2024-12-12 06:45:57.214622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:49.919 [2024-12-12 06:45:57.214678] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:12514849437130796461 len:44462 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.919 [2024-12-12 06:45:57.214695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:49.919 #52 NEW cov: 12475 ft: 15535 corp: 34/1331b lim: 100 exec/s: 52 rss: 74Mb L: 47/64 MS: 1 ShuffleBytes- 00:07:49.919 [2024-12-12 06:45:57.274698] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:12514849898252578221 len:44462 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.919 [2024-12-12 06:45:57.274726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:49.919 [2024-12-12 06:45:57.274784] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:12514849900987264429 len:44462 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.919 [2024-12-12 06:45:57.274801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:49.919 #53 NEW cov: 12475 ft: 15567 corp: 35/1376b lim: 100 exec/s: 53 rss: 74Mb L: 45/64 MS: 1 CrossOver- 00:07:49.919 [2024-12-12 06:45:57.314840] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:12514849898252578221 len:19969 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.919 [2024-12-12 06:45:57.314868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:49.919 [2024-12-12 06:45:57.314916] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:12514849900987264429 len:44462 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.919 [2024-12-12 06:45:57.314933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:49.919 #54 NEW cov: 12475 ft: 15576 corp: 36/1421b lim: 100 exec/s: 54 rss: 74Mb L: 45/64 MS: 1 CrossOver- 00:07:49.919 [2024-12-12 06:45:57.355271] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:3761688987579986996 len:13365 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.919 [2024-12-12 06:45:57.355299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:49.919 [2024-12-12 06:45:57.355364] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:3761688987579986996 len:13365 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.919 [2024-12-12 06:45:57.355380] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:49.919 [2024-12-12 06:45:57.355434] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:12514849900987264429 len:513 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.919 [2024-12-12 06:45:57.355449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:49.919 [2024-12-12 06:45:57.355507] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.919 [2024-12-12 06:45:57.355524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:49.919 #55 NEW cov: 12475 ft: 15938 corp: 37/1510b lim: 100 exec/s: 27 rss: 74Mb L: 89/89 MS: 1 InsertRepeatedBytes- 00:07:49.919 #55 DONE cov: 12475 ft: 15938 corp: 37/1510b lim: 100 exec/s: 27 rss: 74Mb 00:07:49.919 ###### Recommended dictionary. ###### 00:07:49.919 "\001\000\000\000\000\000\000\000" # Uses: 1 00:07:49.919 "\000\002" # Uses: 3 00:07:49.919 "\377\377\377\020" # Uses: 0 00:07:49.919 "\016\000\000\000" # Uses: 0 00:07:49.919 ###### End of recommended dictionary. 
###### 00:07:49.919 Done 55 runs in 2 second(s) 00:07:50.178 06:45:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_24.conf /var/tmp/suppress_nvmf_fuzz 00:07:50.178 06:45:57 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:50.178 06:45:57 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:50.178 06:45:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@79 -- # trap - SIGINT SIGTERM EXIT 00:07:50.178 00:07:50.178 real 1m3.344s 00:07:50.178 user 1m40.079s 00:07:50.178 sys 0m7.082s 00:07:50.178 06:45:57 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:50.178 06:45:57 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:07:50.178 ************************************ 00:07:50.178 END TEST nvmf_llvm_fuzz 00:07:50.178 ************************************ 00:07:50.179 06:45:57 llvm_fuzz -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:07:50.179 06:45:57 llvm_fuzz -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:07:50.179 06:45:57 llvm_fuzz -- fuzz/llvm.sh@20 -- # run_test vfio_llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh 00:07:50.179 06:45:57 llvm_fuzz -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:50.179 06:45:57 llvm_fuzz -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:50.179 06:45:57 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:07:50.179 ************************************ 00:07:50.179 START TEST vfio_llvm_fuzz 00:07:50.179 ************************************ 00:07:50.179 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh 00:07:50.179 * Looking for test storage... 
00:07:50.179 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:50.179 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:50.179 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:07:50.179 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:50.441 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:50.441 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:50.441 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:50.441 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:50.441 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:07:50.441 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:07:50.441 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:07:50.441 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:07:50.441 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:07:50.441 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:07:50.441 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:07:50.441 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:50.441 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:07:50.441 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:07:50.441 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:50.441 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:50.441 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:07:50.441 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:07:50.441 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:50.441 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:07:50.441 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:07:50.441 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:07:50.441 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:07:50.441 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:50.441 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:07:50.441 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:07:50.441 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:50.441 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:50.441 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:07:50.441 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:50.441 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:50.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.441 --rc genhtml_branch_coverage=1 00:07:50.441 --rc genhtml_function_coverage=1 00:07:50.441 --rc genhtml_legend=1 00:07:50.441 --rc geninfo_all_blocks=1 00:07:50.441 --rc geninfo_unexecuted_blocks=1 00:07:50.441 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:50.441 ' 00:07:50.441 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:50.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.441 --rc genhtml_branch_coverage=1 00:07:50.441 --rc genhtml_function_coverage=1 00:07:50.441 --rc genhtml_legend=1 00:07:50.441 --rc geninfo_all_blocks=1 00:07:50.441 --rc geninfo_unexecuted_blocks=1 00:07:50.441 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:50.441 ' 00:07:50.442 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:50.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.442 --rc genhtml_branch_coverage=1 00:07:50.442 --rc genhtml_function_coverage=1 00:07:50.442 --rc genhtml_legend=1 00:07:50.442 --rc geninfo_all_blocks=1 00:07:50.442 --rc geninfo_unexecuted_blocks=1 00:07:50.442 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:50.442 ' 00:07:50.442 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:50.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.442 --rc genhtml_branch_coverage=1 00:07:50.442 --rc genhtml_function_coverage=1 00:07:50.442 --rc genhtml_legend=1 00:07:50.442 --rc geninfo_all_blocks=1 00:07:50.442 --rc geninfo_unexecuted_blocks=1 00:07:50.442 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:50.442 ' 00:07:50.442 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@64 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:07:50.442 06:45:57 llvm_fuzz.vfio_llvm_fuzz 
-- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:07:50.442 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:50.442 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@34 -- # set -e 00:07:50.442 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:50.442 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:50.442 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:07:50.442 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:07:50.442 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:50.442 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:07:50.442 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:50.442 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:50.442 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:50.442 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:50.442 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:50.442 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:50.442 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:50.442 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:50.442 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:50.442 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:50.442 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:50.442 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:50.442 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:50.442 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:50.442 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:50.442 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:50.442 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:07:50.442 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:07:50.442 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:50.442 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:07:50.442 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:07:50.442 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:07:50.442 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@23 -- # CONFIG_CET=n 00:07:50.442 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@24 -- # 
CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:50.442 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:07:50.442 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:07:50.442 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:07:50.442 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:50.442 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:50.442 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:07:50.442 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:07:50.442 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:07:50.442 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:07:50.442 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:07:50.442 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:07:50.442 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:07:50.442 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@37 -- # CONFIG_FUZZER=y 00:07:50.442 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:07:50.442 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:07:50.442 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:07:50.442 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:07:50.442 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:07:50.442 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:07:50.442 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:07:50.442 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:07:50.442 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:07:50.442 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:50.442 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:07:50.442 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:07:50.442 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:07:50.442 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:07:50.442 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:07:50.442 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:07:50.442 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:50.442 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:07:50.442 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:07:50.442 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:07:50.442 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@58 -- # 
CONFIG_ARCH=native 00:07:50.442 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:07:50.442 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:07:50.442 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:07:50.442 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:07:50.442 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:07:50.442 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:07:50.442 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:07:50.442 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:07:50.442 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:07:50.442 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:07:50.442 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:07:50.442 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:07:50.442 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:07:50.442 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@72 -- # CONFIG_SHARED=n 00:07:50.442 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:07:50.442 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:07:50.442 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:50.442 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@76 -- # CONFIG_FC=n 00:07:50.442 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:07:50.442 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:07:50.442 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:07:50.442 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:07:50.442 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:07:50.442 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:07:50.442 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:07:50.442 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:07:50.442 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:07:50.442 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:07:50.442 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:50.442 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:07:50.442 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:07:50.442 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@90 -- # CONFIG_URING=n 00:07:50.442 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:07:50.442 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # dirname 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:07:50.442 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:07:50.442 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:07:50.442 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:07:50.442 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:50.443 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:07:50.443 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:50.443 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:50.443 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:50.443 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:50.443 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:50.443 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:50.443 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:50.443 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:07:50.443 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:50.443 #define SPDK_CONFIG_H 00:07:50.443 #define SPDK_CONFIG_AIO_FSDEV 1 00:07:50.443 #define SPDK_CONFIG_APPS 1 00:07:50.443 #define SPDK_CONFIG_ARCH native 00:07:50.443 #undef SPDK_CONFIG_ASAN 00:07:50.443 #undef SPDK_CONFIG_AVAHI 00:07:50.443 #undef SPDK_CONFIG_CET 00:07:50.443 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:07:50.443 #define SPDK_CONFIG_COVERAGE 1 00:07:50.443 #define SPDK_CONFIG_CROSS_PREFIX 00:07:50.443 #undef SPDK_CONFIG_CRYPTO 00:07:50.443 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:50.443 #undef SPDK_CONFIG_CUSTOMOCF 00:07:50.443 #undef SPDK_CONFIG_DAOS 00:07:50.443 #define SPDK_CONFIG_DAOS_DIR 00:07:50.443 #define SPDK_CONFIG_DEBUG 1 00:07:50.443 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:50.443 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:07:50.443 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:50.443 #define SPDK_CONFIG_DPDK_LIB_DIR 00:07:50.443 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:50.443 #undef SPDK_CONFIG_DPDK_UADK 00:07:50.443 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:07:50.443 #define SPDK_CONFIG_EXAMPLES 1 00:07:50.443 #undef SPDK_CONFIG_FC 00:07:50.443 #define SPDK_CONFIG_FC_PATH 00:07:50.443 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:50.443 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:50.443 #define SPDK_CONFIG_FSDEV 1 00:07:50.443 #undef SPDK_CONFIG_FUSE 00:07:50.443 #define SPDK_CONFIG_FUZZER 1 00:07:50.443 #define SPDK_CONFIG_FUZZER_LIB /usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:07:50.443 #undef 
SPDK_CONFIG_GOLANG 00:07:50.443 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:50.443 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:50.443 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:50.443 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:07:50.443 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:50.443 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:50.443 #undef SPDK_CONFIG_HAVE_LZ4 00:07:50.443 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:07:50.443 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:07:50.443 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:50.443 #define SPDK_CONFIG_IDXD 1 00:07:50.443 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:50.443 #undef SPDK_CONFIG_IPSEC_MB 00:07:50.443 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:50.443 #define SPDK_CONFIG_ISAL 1 00:07:50.443 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:50.443 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:50.443 #define SPDK_CONFIG_LIBDIR 00:07:50.443 #undef SPDK_CONFIG_LTO 00:07:50.443 #define SPDK_CONFIG_MAX_LCORES 128 00:07:50.443 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:07:50.443 #define SPDK_CONFIG_NVME_CUSE 1 00:07:50.443 #undef SPDK_CONFIG_OCF 00:07:50.443 #define SPDK_CONFIG_OCF_PATH 00:07:50.443 #define SPDK_CONFIG_OPENSSL_PATH 00:07:50.443 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:50.443 #define SPDK_CONFIG_PGO_DIR 00:07:50.443 #undef SPDK_CONFIG_PGO_USE 00:07:50.443 #define SPDK_CONFIG_PREFIX /usr/local 00:07:50.443 #undef SPDK_CONFIG_RAID5F 00:07:50.443 #undef SPDK_CONFIG_RBD 00:07:50.443 #define SPDK_CONFIG_RDMA 1 00:07:50.443 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:50.443 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:50.443 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:50.443 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:50.443 #undef SPDK_CONFIG_SHARED 00:07:50.443 #undef SPDK_CONFIG_SMA 00:07:50.443 #define SPDK_CONFIG_TESTS 1 00:07:50.443 #undef SPDK_CONFIG_TSAN 00:07:50.443 #define SPDK_CONFIG_UBLK 1 00:07:50.443 #define SPDK_CONFIG_UBSAN 1 00:07:50.443 #undef SPDK_CONFIG_UNIT_TESTS 00:07:50.443 #undef SPDK_CONFIG_URING 00:07:50.443 #define SPDK_CONFIG_URING_PATH 00:07:50.443 #undef SPDK_CONFIG_URING_ZNS 00:07:50.443 #undef SPDK_CONFIG_USDT 00:07:50.443 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:50.443 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:50.443 #define SPDK_CONFIG_VFIO_USER 1 00:07:50.443 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:50.443 #define SPDK_CONFIG_VHOST 1 00:07:50.443 #define SPDK_CONFIG_VIRTIO 1 00:07:50.443 #undef SPDK_CONFIG_VTUNE 00:07:50.443 #define SPDK_CONFIG_VTUNE_DIR 00:07:50.443 #define SPDK_CONFIG_WERROR 1 00:07:50.443 #define SPDK_CONFIG_WPDK_DIR 00:07:50.443 #undef SPDK_CONFIG_XNVME 00:07:50.443 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:50.443 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:50.443 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:07:50.443 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:07:50.443 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:50.443 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:50.443 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:50.443 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.443 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.443 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.443 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@5 -- # export PATH 00:07:50.443 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.443 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:07:50.443 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:07:50.443 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:07:50.443 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:07:50.443 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:50.443 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:07:50.443 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- pm/common@64 -- # TEST_TAG=N/A 00:07:50.443 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:07:50.443 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:07:50.443 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- pm/common@68 -- # uname -s 00:07:50.443 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- 
pm/common@68 -- # PM_OS=Linux 00:07:50.443 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:50.443 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:50.443 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:50.443 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:50.443 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:50.443 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:50.443 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- pm/common@76 -- # SUDO[0]= 00:07:50.443 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:50.443 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:50.443 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:50.443 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:50.443 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:50.443 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:07:50.443 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:50.443 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:50.443 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power ]] 00:07:50.443 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@58 -- # : 1 00:07:50.443 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:07:50.443 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@62 -- # : 0 00:07:50.443 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:50.443 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@64 -- # : 0 00:07:50.443 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:07:50.443 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@66 -- # : 1 00:07:50.443 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@68 -- # : 0 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@70 -- # : 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@72 -- # : 0 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@74 -- # : 0 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@76 -- # : 0 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- 
common/autotest_common.sh@78 -- # : 0 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@80 -- # : 0 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@82 -- # : 0 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@84 -- # : 0 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@86 -- # : 0 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@88 -- # : 0 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@90 -- # : 0 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@92 -- # : 0 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@94 -- # : 0 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@96 -- # : 0 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@98 -- # : 1 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@100 -- # : 1 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@102 -- # : rdma 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@104 -- # : 0 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@106 -- # : 0 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@108 -- # : 0 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@110 -- # : 0 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@112 -- # : 0 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:07:50.444 06:45:57 
llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@114 -- # : 0 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@116 -- # : 0 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@118 -- # : 0 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@120 -- # : 0 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@122 -- # : 0 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@124 -- # : 1 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@126 -- # : 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@128 -- # : 0 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@130 -- # : 0 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@132 -- # : 0 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@134 -- # : 0 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@136 -- # : 0 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@138 -- # : 0 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@140 -- # : 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@142 -- # : true 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@144 -- # : 0 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@146 -- # : 0 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@148 -- # : 0 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 
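Note on the run of "-- # : 0" / "-- # export SPDK_TEST_*" entries above: this is the xtrace of autotest_common.sh giving each test flag a default before exporting it. A minimal sketch of that default-then-export idiom is below; the flag names are taken from the log, but the exact wording of the source lines in autotest_common.sh is an assumption, not a verbatim copy.
#!/usr/bin/env bash
# Default-then-export sketch: ":" is a no-op, so ': "${VAR=default}"' only
# assigns when VAR is unset, and its xtrace appears as '-- # : 0' in the log.
set -x
: "${SPDK_TEST_BLOBFS=0}";            export SPDK_TEST_BLOBFS
: "${SPDK_TEST_VHOST_INIT=0}";        export SPDK_TEST_VHOST_INIT
: "${SPDK_RUN_UBSAN=0}";              export SPDK_RUN_UBSAN
: "${SPDK_TEST_NVMF_TRANSPORT=rdma}"; export SPDK_TEST_NVMF_TRANSPORT
# Inspect the result the same way a CI step could:
env | grep -E '^SPDK_(TEST|RUN)_' | sort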
00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@150 -- # : 0 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@152 -- # : 0 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@154 -- # : 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@156 -- # : 0 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@158 -- # : 0 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@160 -- # : 0 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@162 -- # : 0 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@164 -- # : 0 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@166 -- # : 0 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@169 -- # : 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@171 -- # : 0 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@173 -- # : 0 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@175 -- # : 1 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@177 -- # : 0 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@183 -- # export 
VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:50.444 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:50.445 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:50.445 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:50.445 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:50.445 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:07:50.445 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:07:50.445 06:45:57 
llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:50.445 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:07:50.445 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:50.445 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:50.445 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:50.445 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:50.445 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:50.445 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:07:50.445 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@206 -- # cat 00:07:50.445 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:07:50.445 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:50.445 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:50.445 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:50.445 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:50.445 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:07:50.445 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:07:50.445 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:50.445 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:50.445 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:50.445 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:50.445 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:50.445 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:50.445 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:50.445 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:50.445 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@262 -- # export 
AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:50.445 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:50.445 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:50.445 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:50.445 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:07:50.445 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:07:50.445 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@269 -- # _LCOV= 00:07:50.445 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:07:50.445 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@270 -- # [[ 1 -eq 1 ]] 00:07:50.445 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@270 -- # _LCOV=1 00:07:50.445 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:07:50.445 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:07:50.445 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@275 -- # lcov_opt='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:07:50.445 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:07:50.445 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@279 -- # export valgrind= 00:07:50.445 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@279 -- # valgrind= 00:07:50.445 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@285 -- # uname -s 00:07:50.445 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:07:50.445 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:07:50.445 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:07:50.445 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:07:50.445 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@289 -- # MAKE=make 00:07:50.445 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j112 00:07:50.445 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:07:50.445 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:07:50.445 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:07:50.445 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@309 -- # TEST_MODE= 00:07:50.445 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@331 -- # [[ -z 1164386 ]] 00:07:50.445 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@331 -- # kill -0 1164386 00:07:50.445 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:07:50.445 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:07:50.445 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:07:50.445 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- 
common/autotest_common.sh@344 -- # local mount target_dir 00:07:50.445 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:07:50.445 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:07:50.445 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:07:50.445 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:07:50.445 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.lot9sH 00:07:50.445 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:50.445 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:07:50.445 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:07:50.445 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio /tmp/spdk.lot9sH/tests/vfio /tmp/spdk.lot9sH 00:07:50.445 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:07:50.445 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:07:50.445 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@340 -- # df -T 00:07:50.445 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:07:50.445 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:07:50.445 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:07:50.445 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:07:50.445 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:07:50.445 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:07:50.445 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:07:50.445 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:07:50.445 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:07:50.445 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=785162240 00:07:50.445 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:07:50.445 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=4499267584 00:07:50.445 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:07:50.445 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:07:50.445 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:07:50.445 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=54075502592 00:07:50.445 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=61730570240 00:07:50.445 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@376 -- # 
uses["$mount"]=7655067648 00:07:50.445 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:07:50.445 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:07:50.446 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:07:50.446 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=30861856768 00:07:50.446 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=30865285120 00:07:50.446 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=3428352 00:07:50.446 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:07:50.446 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:07:50.446 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:07:50.446 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=12340113408 00:07:50.446 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=12346114048 00:07:50.446 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=6000640 00:07:50.446 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:07:50.446 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:07:50.446 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:07:50.446 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=30865084416 00:07:50.446 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=30865285120 00:07:50.446 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=200704 00:07:50.446 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:07:50.446 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:07:50.446 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:07:50.446 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=6173044736 00:07:50.446 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=6173057024 00:07:50.446 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:07:50.446 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:07:50.446 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:07:50.446 * Looking for test storage... 
00:07:50.446 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@381 -- # local target_space new_size 00:07:50.446 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:07:50.446 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:50.446 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:50.446 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@385 -- # mount=/ 00:07:50.446 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@387 -- # target_space=54075502592 00:07:50.446 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:07:50.446 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:07:50.446 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:07:50.446 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:07:50.446 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:07:50.446 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@394 -- # new_size=9869660160 00:07:50.446 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:50.446 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:50.446 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:50.446 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:50.446 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:50.446 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@402 -- # return 0 00:07:50.446 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1698 -- # set -o errtrace 00:07:50.446 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:07:50.446 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:50.446 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:50.446 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1703 -- # true 00:07:50.446 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1705 -- # xtrace_fd 00:07:50.446 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:50.446 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:50.446 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@27 -- # exec 00:07:50.446 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@29 -- # exec 00:07:50.446 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:50.446 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@16 -- # unset -v 
'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:07:50.446 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:50.446 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@18 -- # set -x 00:07:50.446 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:50.446 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:07:50.446 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:50.706 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:50.706 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:50.706 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:50.706 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:50.706 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:07:50.706 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:07:50.706 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:07:50.706 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:07:50.706 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:07:50.706 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:07:50.706 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:07:50.706 06:45:57 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:50.706 06:45:58 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:07:50.706 06:45:58 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:07:50.706 06:45:58 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:50.706 06:45:58 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:50.706 06:45:58 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:07:50.706 06:45:58 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:07:50.706 06:45:58 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:50.706 06:45:58 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:07:50.706 06:45:58 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:07:50.706 06:45:58 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:07:50.706 06:45:58 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:07:50.706 06:45:58 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:50.706 06:45:58 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:07:50.706 06:45:58 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:07:50.706 06:45:58 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:50.706 06:45:58 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:50.706 06:45:58 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:07:50.706 06:45:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:50.706 06:45:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:50.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.706 --rc genhtml_branch_coverage=1 00:07:50.706 --rc genhtml_function_coverage=1 00:07:50.706 --rc genhtml_legend=1 00:07:50.706 --rc geninfo_all_blocks=1 00:07:50.706 --rc geninfo_unexecuted_blocks=1 00:07:50.706 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:50.706 ' 00:07:50.706 06:45:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:50.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.706 --rc genhtml_branch_coverage=1 00:07:50.706 --rc genhtml_function_coverage=1 00:07:50.706 --rc genhtml_legend=1 00:07:50.706 --rc geninfo_all_blocks=1 00:07:50.706 --rc geninfo_unexecuted_blocks=1 00:07:50.706 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:50.706 ' 00:07:50.706 06:45:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:50.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.706 --rc genhtml_branch_coverage=1 00:07:50.706 --rc genhtml_function_coverage=1 00:07:50.706 --rc genhtml_legend=1 00:07:50.706 --rc geninfo_all_blocks=1 00:07:50.706 --rc geninfo_unexecuted_blocks=1 00:07:50.706 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:50.706 ' 00:07:50.706 06:45:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:50.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.706 --rc genhtml_branch_coverage=1 00:07:50.706 --rc genhtml_function_coverage=1 00:07:50.706 --rc genhtml_legend=1 00:07:50.706 --rc geninfo_all_blocks=1 00:07:50.706 --rc geninfo_unexecuted_blocks=1 00:07:50.706 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:50.706 ' 00:07:50.706 06:45:58 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@65 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/../common.sh 00:07:50.706 06:45:58 
llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@8 -- # pids=() 00:07:50.706 06:45:58 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@67 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:07:50.706 06:45:58 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@68 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:07:50.706 06:45:58 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@68 -- # fuzz_num=7 00:07:50.706 06:45:58 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@69 -- # (( fuzz_num != 0 )) 00:07:50.706 06:45:58 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@71 -- # trap 'cleanup /tmp/vfio-user-* /var/tmp/suppress_vfio_fuzz; exit 1' SIGINT SIGTERM EXIT 00:07:50.706 06:45:58 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@74 -- # mem_size=0 00:07:50.706 06:45:58 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@75 -- # [[ 1 -eq 1 ]] 00:07:50.706 06:45:58 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@76 -- # start_llvm_fuzz_short 7 1 00:07:50.706 06:45:58 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@69 -- # local fuzz_num=7 00:07:50.706 06:45:58 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@70 -- # local time=1 00:07:50.706 06:45:58 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i = 0 )) 00:07:50.706 06:45:58 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:50.706 06:45:58 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:07:50.706 06:45:58 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=0 00:07:50.706 06:45:58 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:50.706 06:45:58 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:50.706 06:45:58 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:07:50.706 06:45:58 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-0 00:07:50.706 06:45:58 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-0/domain/1 00:07:50.706 06:45:58 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-0/domain/2 00:07:50.706 06:45:58 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-0/fuzz_vfio_json.conf 00:07:50.706 06:45:58 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:50.706 06:45:58 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:50.706 06:45:58 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-0 /tmp/vfio-user-0/domain/1 /tmp/vfio-user-0/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:07:50.706 06:45:58 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-0/domain/1%; 00:07:50.706 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-0/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:50.706 06:45:58 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:50.706 06:45:58 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:50.706 06:45:58 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-0/domain/1 -c /tmp/vfio-user-0/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 -Y /tmp/vfio-user-0/domain/2 -r /tmp/vfio-user-0/spdk0.sock -Z 0 00:07:50.706 [2024-12-12 06:45:58.077226] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 00:07:50.706 [2024-12-12 06:45:58.077308] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1164532 ] 00:07:50.706 [2024-12-12 06:45:58.158232] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.706 [2024-12-12 06:45:58.201407] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.966 INFO: Running with entropic power schedule (0xFF, 100). 00:07:50.966 INFO: Seed: 2523695411 00:07:50.966 INFO: Loaded 1 modules (388184 inline 8-bit counters): 388184 [0x2c4b80c, 0x2caa464), 00:07:50.966 INFO: Loaded 1 PC tables (388184 PCs): 388184 [0x2caa468,0x32969e8), 00:07:50.966 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:07:50.966 INFO: A corpus is not provided, starting from an empty corpus 00:07:50.966 #2 INITED exec/s: 0 rss: 68Mb 00:07:50.966 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:50.966 This may also happen if the target rejected all inputs we tried so far 00:07:50.966 [2024-12-12 06:45:58.440347] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /tmp/vfio-user-0/domain/2: enabling controller 00:07:51.483 NEW_FUNC[1/676]: 0x43b608 in fuzz_vfio_user_region_rw /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:84 00:07:51.483 NEW_FUNC[2/676]: 0x441118 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:51.483 #26 NEW cov: 11251 ft: 11198 corp: 2/7b lim: 6 exec/s: 0 rss: 74Mb L: 6/6 MS: 4 InsertRepeatedBytes-CopyPart-CrossOver-InsertByte- 00:07:51.742 #35 NEW cov: 11265 ft: 14501 corp: 3/13b lim: 6 exec/s: 0 rss: 76Mb L: 6/6 MS: 4 CMP-EraseBytes-InsertByte-CopyPart- DE: "\001\000\000\000"- 00:07:51.742 NEW_FUNC[1/1]: 0x1c24b08 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:652 00:07:51.742 #36 NEW cov: 11282 ft: 15537 corp: 4/19b lim: 6 exec/s: 0 rss: 77Mb L: 6/6 MS: 1 CopyPart- 00:07:52.001 #42 NEW cov: 11282 ft: 16053 corp: 5/25b lim: 6 exec/s: 0 rss: 77Mb L: 6/6 MS: 1 ChangeByte- 00:07:52.260 #48 NEW cov: 11282 ft: 16834 corp: 6/31b lim: 6 exec/s: 48 rss: 78Mb L: 6/6 MS: 1 ChangeByte- 00:07:52.260 #49 NEW cov: 11282 ft: 18027 corp: 7/37b lim: 6 exec/s: 49 rss: 78Mb L: 6/6 MS: 1 CrossOver- 00:07:52.519 #55 NEW cov: 11282 ft: 18180 corp: 8/43b lim: 6 exec/s: 55 rss: 78Mb L: 6/6 MS: 1 CrossOver- 00:07:52.778 #56 NEW cov: 11282 ft: 18227 corp: 9/49b lim: 6 exec/s: 56 rss: 78Mb L: 6/6 MS: 1 CopyPart- 00:07:52.778 #57 NEW cov: 11289 ft: 18334 corp: 10/55b lim: 6 exec/s: 57 rss: 78Mb L: 6/6 MS: 1 ChangeBit- 00:07:53.037 #58 NEW cov: 11289 ft: 18488 corp: 11/61b lim: 6 exec/s: 29 rss: 78Mb L: 6/6 MS: 1 ChangeByte- 00:07:53.037 #58 DONE cov: 11289 ft: 18488 corp: 11/61b lim: 6 exec/s: 29 rss: 78Mb 00:07:53.037 ###### Recommended dictionary. 
###### 00:07:53.037 "\001\000\000\000" # Uses: 1 00:07:53.037 ###### End of recommended dictionary. ###### 00:07:53.037 Done 58 runs in 2 second(s) 00:07:53.037 [2024-12-12 06:46:00.484353] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /tmp/vfio-user-0/domain/2: disabling controller 00:07:53.296 06:46:00 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-0 /var/tmp/suppress_vfio_fuzz 00:07:53.296 06:46:00 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:53.296 06:46:00 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:53.296 06:46:00 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:07:53.296 06:46:00 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=1 00:07:53.296 06:46:00 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:53.296 06:46:00 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:53.296 06:46:00 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:07:53.296 06:46:00 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-1 00:07:53.296 06:46:00 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-1/domain/1 00:07:53.296 06:46:00 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-1/domain/2 00:07:53.296 06:46:00 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-1/fuzz_vfio_json.conf 00:07:53.296 06:46:00 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:53.296 06:46:00 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:53.296 06:46:00 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-1 /tmp/vfio-user-1/domain/1 /tmp/vfio-user-1/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:07:53.296 06:46:00 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-1/domain/1%; 00:07:53.296 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-1/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:53.296 06:46:00 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:53.296 06:46:00 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:53.296 06:46:00 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-1/domain/1 -c /tmp/vfio-user-1/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 -Y /tmp/vfio-user-1/domain/2 -r /tmp/vfio-user-1/spdk1.sock -Z 1 00:07:53.296 [2024-12-12 06:46:00.753947] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 
00:07:53.296 [2024-12-12 06:46:00.754033] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1164975 ] 00:07:53.555 [2024-12-12 06:46:00.837806] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.555 [2024-12-12 06:46:00.878435] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.555 INFO: Running with entropic power schedule (0xFF, 100). 00:07:53.555 INFO: Seed: 906731744 00:07:53.813 INFO: Loaded 1 modules (388184 inline 8-bit counters): 388184 [0x2c4b80c, 0x2caa464), 00:07:53.813 INFO: Loaded 1 PC tables (388184 PCs): 388184 [0x2caa468,0x32969e8), 00:07:53.813 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:07:53.813 INFO: A corpus is not provided, starting from an empty corpus 00:07:53.813 #2 INITED exec/s: 0 rss: 66Mb 00:07:53.813 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:53.813 This may also happen if the target rejected all inputs we tried so far 00:07:53.813 [2024-12-12 06:46:01.117719] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /tmp/vfio-user-1/domain/2: enabling controller 00:07:53.813 [2024-12-12 06:46:01.163195] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:53.813 [2024-12-12 06:46:01.163219] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:53.813 [2024-12-12 06:46:01.163237] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:54.072 NEW_FUNC[1/678]: 0x43bba8 in fuzz_vfio_user_version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:71 00:07:54.072 NEW_FUNC[2/678]: 0x441118 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:54.072 #52 NEW cov: 11247 ft: 11216 corp: 2/5b lim: 4 exec/s: 0 rss: 72Mb L: 4/4 MS: 5 InsertByte-InsertByte-CopyPart-ShuffleBytes-CrossOver- 00:07:54.331 [2024-12-12 06:46:01.608518] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:54.331 [2024-12-12 06:46:01.608551] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:54.331 [2024-12-12 06:46:01.608569] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:54.331 #53 NEW cov: 11262 ft: 14319 corp: 3/9b lim: 4 exec/s: 0 rss: 74Mb L: 4/4 MS: 1 ChangeBinInt- 00:07:54.331 [2024-12-12 06:46:01.770684] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:54.331 [2024-12-12 06:46:01.770707] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:54.331 [2024-12-12 06:46:01.770727] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:54.590 #54 NEW cov: 11262 ft: 14956 corp: 4/13b lim: 4 exec/s: 0 rss: 75Mb L: 4/4 MS: 1 ChangeByte- 00:07:54.590 [2024-12-12 06:46:01.933050] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:54.590 [2024-12-12 06:46:01.933073] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:54.590 [2024-12-12 06:46:01.933091] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 
00:07:54.590 NEW_FUNC[1/1]: 0x1c24b08 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:652 00:07:54.590 #55 NEW cov: 11282 ft: 15116 corp: 5/17b lim: 4 exec/s: 0 rss: 75Mb L: 4/4 MS: 1 ChangeBit- 00:07:54.590 [2024-12-12 06:46:02.098225] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:54.590 [2024-12-12 06:46:02.098247] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:54.590 [2024-12-12 06:46:02.098264] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:54.849 #60 NEW cov: 11282 ft: 15976 corp: 6/21b lim: 4 exec/s: 60 rss: 75Mb L: 4/4 MS: 5 CrossOver-InsertByte-ChangeBit-CrossOver-CrossOver- 00:07:54.849 [2024-12-12 06:46:02.269469] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:54.849 [2024-12-12 06:46:02.269492] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:54.849 [2024-12-12 06:46:02.269508] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:54.849 #61 NEW cov: 11282 ft: 16601 corp: 7/25b lim: 4 exec/s: 61 rss: 75Mb L: 4/4 MS: 1 ChangeByte- 00:07:55.108 [2024-12-12 06:46:02.431560] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:55.108 [2024-12-12 06:46:02.431582] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:55.108 [2024-12-12 06:46:02.431599] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:55.108 #62 NEW cov: 11282 ft: 16613 corp: 8/29b lim: 4 exec/s: 62 rss: 75Mb L: 4/4 MS: 1 CopyPart- 00:07:55.108 [2024-12-12 06:46:02.595313] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:55.108 [2024-12-12 06:46:02.595334] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:55.108 [2024-12-12 06:46:02.595351] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:55.368 #63 NEW cov: 11282 ft: 16626 corp: 9/33b lim: 4 exec/s: 63 rss: 75Mb L: 4/4 MS: 1 ChangeBit- 00:07:55.368 [2024-12-12 06:46:02.760241] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:55.368 [2024-12-12 06:46:02.760264] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:55.368 [2024-12-12 06:46:02.760282] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:55.368 #64 NEW cov: 11282 ft: 17045 corp: 10/37b lim: 4 exec/s: 64 rss: 75Mb L: 4/4 MS: 1 ChangeBit- 00:07:55.628 [2024-12-12 06:46:02.926548] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:55.628 [2024-12-12 06:46:02.926570] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:55.628 [2024-12-12 06:46:02.926587] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:55.628 #70 NEW cov: 11289 ft: 17306 corp: 11/41b lim: 4 exec/s: 70 rss: 75Mb L: 4/4 MS: 1 ShuffleBytes- 00:07:55.628 [2024-12-12 06:46:03.092546] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:55.628 [2024-12-12 06:46:03.092569] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:55.628 [2024-12-12 06:46:03.092585] vfio_user.c: 144:vfio_user_read: *ERROR*: 
Command 1 return failure 00:07:55.887 #71 NEW cov: 11289 ft: 17438 corp: 12/45b lim: 4 exec/s: 35 rss: 75Mb L: 4/4 MS: 1 ChangeBit- 00:07:55.887 #71 DONE cov: 11289 ft: 17438 corp: 12/45b lim: 4 exec/s: 35 rss: 75Mb 00:07:55.887 Done 71 runs in 2 second(s) 00:07:55.887 [2024-12-12 06:46:03.208344] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /tmp/vfio-user-1/domain/2: disabling controller 00:07:56.146 06:46:03 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-1 /var/tmp/suppress_vfio_fuzz 00:07:56.146 06:46:03 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:56.146 06:46:03 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:56.146 06:46:03 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:07:56.146 06:46:03 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=2 00:07:56.146 06:46:03 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:56.146 06:46:03 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:56.146 06:46:03 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:07:56.146 06:46:03 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-2 00:07:56.146 06:46:03 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-2/domain/1 00:07:56.146 06:46:03 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-2/domain/2 00:07:56.146 06:46:03 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-2/fuzz_vfio_json.conf 00:07:56.146 06:46:03 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:56.146 06:46:03 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:56.146 06:46:03 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-2 /tmp/vfio-user-2/domain/1 /tmp/vfio-user-2/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:07:56.146 06:46:03 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-2/domain/1%; 00:07:56.146 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-2/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:56.146 06:46:03 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:56.146 06:46:03 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:56.146 06:46:03 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-2/domain/1 -c /tmp/vfio-user-2/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 -Y /tmp/vfio-user-2/domain/2 -r /tmp/vfio-user-2/spdk2.sock -Z 2 00:07:56.146 [2024-12-12 06:46:03.468673] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 
00:07:56.146 [2024-12-12 06:46:03.468741] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1165366 ] 00:07:56.146 [2024-12-12 06:46:03.548779] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.147 [2024-12-12 06:46:03.589115] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.406 INFO: Running with entropic power schedule (0xFF, 100). 00:07:56.406 INFO: Seed: 3613736719 00:07:56.406 INFO: Loaded 1 modules (388184 inline 8-bit counters): 388184 [0x2c4b80c, 0x2caa464), 00:07:56.406 INFO: Loaded 1 PC tables (388184 PCs): 388184 [0x2caa468,0x32969e8), 00:07:56.406 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:07:56.406 INFO: A corpus is not provided, starting from an empty corpus 00:07:56.406 #2 INITED exec/s: 0 rss: 67Mb 00:07:56.406 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:56.406 This may also happen if the target rejected all inputs we tried so far 00:07:56.406 [2024-12-12 06:46:03.825502] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /tmp/vfio-user-2/domain/2: enabling controller 00:07:56.406 [2024-12-12 06:46:03.874874] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:56.923 NEW_FUNC[1/677]: 0x43c598 in fuzz_vfio_user_get_region_info /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:103 00:07:56.923 NEW_FUNC[2/677]: 0x441118 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:56.923 #54 NEW cov: 11231 ft: 11194 corp: 2/9b lim: 8 exec/s: 0 rss: 74Mb L: 8/8 MS: 2 InsertRepeatedBytes-CopyPart- 00:07:56.923 [2024-12-12 06:46:04.322111] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:56.923 #65 NEW cov: 11248 ft: 14462 corp: 3/17b lim: 8 exec/s: 0 rss: 75Mb L: 8/8 MS: 1 CopyPart- 00:07:57.182 [2024-12-12 06:46:04.501346] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:57.182 NEW_FUNC[1/1]: 0x1c24b08 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:652 00:07:57.182 #68 NEW cov: 11265 ft: 15051 corp: 4/25b lim: 8 exec/s: 0 rss: 76Mb L: 8/8 MS: 3 ChangeByte-CopyPart-InsertRepeatedBytes- 00:07:57.182 [2024-12-12 06:46:04.688402] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:57.441 #69 NEW cov: 11265 ft: 16056 corp: 5/33b lim: 8 exec/s: 69 rss: 76Mb L: 8/8 MS: 1 ChangeByte- 00:07:57.441 [2024-12-12 06:46:04.864485] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:57.700 #70 NEW cov: 11265 ft: 16689 corp: 6/41b lim: 8 exec/s: 70 rss: 76Mb L: 8/8 MS: 1 ChangeBinInt- 00:07:57.700 [2024-12-12 06:46:05.036001] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:57.700 #71 NEW cov: 11265 ft: 16773 corp: 7/49b lim: 8 exec/s: 71 rss: 76Mb L: 8/8 MS: 1 CrossOver- 00:07:57.700 [2024-12-12 06:46:05.209418] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:57.959 #72 NEW cov: 11265 ft: 16912 corp: 8/57b lim: 8 exec/s: 72 rss: 76Mb L: 8/8 MS: 1 ChangeByte- 00:07:57.959 [2024-12-12 
06:46:05.386672] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:58.218 #78 NEW cov: 11265 ft: 17092 corp: 9/65b lim: 8 exec/s: 78 rss: 76Mb L: 8/8 MS: 1 ChangeBit- 00:07:58.218 [2024-12-12 06:46:05.567186] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:58.218 #84 NEW cov: 11272 ft: 17506 corp: 10/73b lim: 8 exec/s: 84 rss: 77Mb L: 8/8 MS: 1 ChangeBit- 00:07:58.478 [2024-12-12 06:46:05.743397] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:58.478 #85 NEW cov: 11272 ft: 17844 corp: 11/81b lim: 8 exec/s: 42 rss: 77Mb L: 8/8 MS: 1 ChangeBinInt- 00:07:58.478 #85 DONE cov: 11272 ft: 17844 corp: 11/81b lim: 8 exec/s: 42 rss: 77Mb 00:07:58.478 Done 85 runs in 2 second(s) 00:07:58.478 [2024-12-12 06:46:05.868340] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /tmp/vfio-user-2/domain/2: disabling controller 00:07:58.738 06:46:06 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-2 /var/tmp/suppress_vfio_fuzz 00:07:58.738 06:46:06 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:58.738 06:46:06 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:58.738 06:46:06 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:07:58.738 06:46:06 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=3 00:07:58.738 06:46:06 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:58.738 06:46:06 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:58.738 06:46:06 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:07:58.738 06:46:06 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-3 00:07:58.738 06:46:06 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-3/domain/1 00:07:58.738 06:46:06 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-3/domain/2 00:07:58.738 06:46:06 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-3/fuzz_vfio_json.conf 00:07:58.738 06:46:06 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:58.738 06:46:06 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:58.738 06:46:06 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-3 /tmp/vfio-user-3/domain/1 /tmp/vfio-user-3/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:07:58.738 06:46:06 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-3/domain/1%; 00:07:58.738 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-3/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:58.738 06:46:06 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:58.738 06:46:06 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:58.738 06:46:06 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-3/domain/1 -c /tmp/vfio-user-3/fuzz_vfio_json.conf -t 1 -D 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 -Y /tmp/vfio-user-3/domain/2 -r /tmp/vfio-user-3/spdk3.sock -Z 3 00:07:58.738 [2024-12-12 06:46:06.127453] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 00:07:58.738 [2024-12-12 06:46:06.127520] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1165896 ] 00:07:58.738 [2024-12-12 06:46:06.206238] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.738 [2024-12-12 06:46:06.245653] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.998 INFO: Running with entropic power schedule (0xFF, 100). 00:07:58.998 INFO: Seed: 1975760311 00:07:58.998 INFO: Loaded 1 modules (388184 inline 8-bit counters): 388184 [0x2c4b80c, 0x2caa464), 00:07:58.998 INFO: Loaded 1 PC tables (388184 PCs): 388184 [0x2caa468,0x32969e8), 00:07:58.998 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:07:58.998 INFO: A corpus is not provided, starting from an empty corpus 00:07:58.998 #2 INITED exec/s: 0 rss: 67Mb 00:07:58.998 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:58.998 This may also happen if the target rejected all inputs we tried so far 00:07:58.998 [2024-12-12 06:46:06.482443] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /tmp/vfio-user-3/domain/2: enabling controller 00:07:59.516 NEW_FUNC[1/677]: 0x43cc88 in fuzz_vfio_user_dma_map /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:124 00:07:59.516 NEW_FUNC[2/677]: 0x441118 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:59.517 #49 NEW cov: 11237 ft: 10578 corp: 2/33b lim: 32 exec/s: 0 rss: 74Mb L: 32/32 MS: 2 InsertRepeatedBytes-InsertRepeatedBytes- 00:07:59.517 #50 NEW cov: 11256 ft: 13917 corp: 3/65b lim: 32 exec/s: 0 rss: 75Mb L: 32/32 MS: 1 ChangeBit- 00:07:59.776 #61 NEW cov: 11256 ft: 15515 corp: 4/97b lim: 32 exec/s: 0 rss: 76Mb L: 32/32 MS: 1 ChangeBit- 00:07:59.776 #62 NEW cov: 11256 ft: 15900 corp: 5/129b lim: 32 exec/s: 0 rss: 76Mb L: 32/32 MS: 1 ShuffleBytes- 00:08:00.035 NEW_FUNC[1/1]: 0x1c24b08 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:652 00:08:00.035 #63 NEW cov: 11273 ft: 15974 corp: 6/161b lim: 32 exec/s: 0 rss: 76Mb L: 32/32 MS: 1 ChangeBit- 00:08:00.035 #64 NEW cov: 11273 ft: 16204 corp: 7/193b lim: 32 exec/s: 64 rss: 76Mb L: 32/32 MS: 1 CopyPart- 00:08:00.294 #65 NEW cov: 11273 ft: 16517 corp: 8/225b lim: 32 exec/s: 65 rss: 76Mb L: 32/32 MS: 1 ShuffleBytes- 00:08:00.294 #81 NEW cov: 11273 ft: 16981 corp: 9/257b lim: 32 exec/s: 81 rss: 76Mb L: 32/32 MS: 1 CrossOver- 00:08:00.553 #82 NEW cov: 11273 ft: 17732 corp: 10/289b lim: 32 exec/s: 82 rss: 76Mb L: 32/32 MS: 1 ChangeBit- 00:08:00.553 #83 NEW cov: 11273 ft: 18019 corp: 11/321b lim: 32 exec/s: 83 rss: 76Mb L: 32/32 MS: 1 CrossOver- 00:08:00.812 #84 NEW cov: 11273 ft: 18156 corp: 12/353b lim: 32 exec/s: 84 rss: 76Mb L: 32/32 MS: 1 ChangeByte- 00:08:01.070 #85 NEW cov: 11280 ft: 18212 corp: 13/385b lim: 32 exec/s: 85 rss: 76Mb L: 32/32 MS: 1 CrossOver- 00:08:01.070 #91 NEW cov: 11280 ft: 18412 corp: 14/417b lim: 32 exec/s: 45 rss: 76Mb L: 32/32 MS: 1 CrossOver- 00:08:01.070 #91 DONE cov: 11280 ft: 
18412 corp: 14/417b lim: 32 exec/s: 45 rss: 76Mb 00:08:01.070 Done 91 runs in 2 second(s) 00:08:01.070 [2024-12-12 06:46:08.573343] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /tmp/vfio-user-3/domain/2: disabling controller 00:08:01.330 06:46:08 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-3 /var/tmp/suppress_vfio_fuzz 00:08:01.330 06:46:08 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:01.330 06:46:08 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:01.330 06:46:08 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:08:01.330 06:46:08 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=4 00:08:01.330 06:46:08 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:08:01.330 06:46:08 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:08:01.330 06:46:08 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:08:01.330 06:46:08 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-4 00:08:01.330 06:46:08 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-4/domain/1 00:08:01.330 06:46:08 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-4/domain/2 00:08:01.330 06:46:08 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-4/fuzz_vfio_json.conf 00:08:01.330 06:46:08 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:08:01.330 06:46:08 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:08:01.330 06:46:08 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-4 /tmp/vfio-user-4/domain/1 /tmp/vfio-user-4/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:08:01.330 06:46:08 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-4/domain/1%; 00:08:01.330 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-4/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:01.330 06:46:08 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:01.330 06:46:08 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:08:01.330 06:46:08 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-4/domain/1 -c /tmp/vfio-user-4/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 -Y /tmp/vfio-user-4/domain/2 -r /tmp/vfio-user-4/spdk4.sock -Z 4 00:08:01.330 [2024-12-12 06:46:08.837037] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 
00:08:01.330 [2024-12-12 06:46:08.837105] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1166435 ] 00:08:01.590 [2024-12-12 06:46:08.916072] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.590 [2024-12-12 06:46:08.955533] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.849 INFO: Running with entropic power schedule (0xFF, 100). 00:08:01.849 INFO: Seed: 390794850 00:08:01.849 INFO: Loaded 1 modules (388184 inline 8-bit counters): 388184 [0x2c4b80c, 0x2caa464), 00:08:01.849 INFO: Loaded 1 PC tables (388184 PCs): 388184 [0x2caa468,0x32969e8), 00:08:01.849 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:08:01.849 INFO: A corpus is not provided, starting from an empty corpus 00:08:01.849 #2 INITED exec/s: 0 rss: 67Mb 00:08:01.849 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:01.849 This may also happen if the target rejected all inputs we tried so far 00:08:01.849 [2024-12-12 06:46:09.191187] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /tmp/vfio-user-4/domain/2: enabling controller 00:08:02.109 NEW_FUNC[1/677]: 0x43d508 in fuzz_vfio_user_dma_unmap /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:144 00:08:02.109 NEW_FUNC[2/677]: 0x441118 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:02.109 #264 NEW cov: 11244 ft: 11086 corp: 2/33b lim: 32 exec/s: 0 rss: 73Mb L: 32/32 MS: 2 InsertRepeatedBytes-InsertRepeatedBytes- 00:08:02.368 #265 NEW cov: 11258 ft: 13828 corp: 3/65b lim: 32 exec/s: 0 rss: 74Mb L: 32/32 MS: 1 ChangeByte- 00:08:02.627 NEW_FUNC[1/1]: 0x1c24b08 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:652 00:08:02.627 #266 NEW cov: 11275 ft: 15546 corp: 4/97b lim: 32 exec/s: 0 rss: 75Mb L: 32/32 MS: 1 ShuffleBytes- 00:08:02.627 #272 NEW cov: 11275 ft: 16041 corp: 5/129b lim: 32 exec/s: 0 rss: 75Mb L: 32/32 MS: 1 CopyPart- 00:08:02.886 #273 NEW cov: 11275 ft: 16173 corp: 6/161b lim: 32 exec/s: 273 rss: 75Mb L: 32/32 MS: 1 CopyPart- 00:08:03.145 #284 NEW cov: 11275 ft: 16481 corp: 7/193b lim: 32 exec/s: 284 rss: 75Mb L: 32/32 MS: 1 CopyPart- 00:08:03.405 #285 NEW cov: 11275 ft: 16533 corp: 8/225b lim: 32 exec/s: 285 rss: 75Mb L: 32/32 MS: 1 CrossOver- 00:08:03.405 #286 NEW cov: 11275 ft: 16723 corp: 9/257b lim: 32 exec/s: 286 rss: 76Mb L: 32/32 MS: 1 ChangeBinInt- 00:08:03.664 #287 NEW cov: 11282 ft: 17020 corp: 10/289b lim: 32 exec/s: 287 rss: 76Mb L: 32/32 MS: 1 ChangeBinInt- 00:08:03.923 #288 NEW cov: 11282 ft: 17049 corp: 11/321b lim: 32 exec/s: 144 rss: 76Mb L: 32/32 MS: 1 CopyPart- 00:08:03.923 #288 DONE cov: 11282 ft: 17049 corp: 11/321b lim: 32 exec/s: 144 rss: 76Mb 00:08:03.923 Done 288 runs in 2 second(s) 00:08:03.923 [2024-12-12 06:46:11.221349] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /tmp/vfio-user-4/domain/2: disabling controller 00:08:03.923 06:46:11 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-4 /var/tmp/suppress_vfio_fuzz 00:08:04.183 06:46:11 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:04.183 06:46:11 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:04.183 06:46:11 llvm_fuzz.vfio_llvm_fuzz -- 
../common.sh@73 -- # start_llvm_fuzz 5 1 0x1 00:08:04.183 06:46:11 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=5 00:08:04.183 06:46:11 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:08:04.183 06:46:11 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:08:04.183 06:46:11 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:08:04.183 06:46:11 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-5 00:08:04.183 06:46:11 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-5/domain/1 00:08:04.183 06:46:11 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-5/domain/2 00:08:04.183 06:46:11 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-5/fuzz_vfio_json.conf 00:08:04.183 06:46:11 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:08:04.183 06:46:11 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:08:04.183 06:46:11 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-5 /tmp/vfio-user-5/domain/1 /tmp/vfio-user-5/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:08:04.183 06:46:11 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-5/domain/1%; 00:08:04.183 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-5/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:04.183 06:46:11 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:04.183 06:46:11 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:08:04.183 06:46:11 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-5/domain/1 -c /tmp/vfio-user-5/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 -Y /tmp/vfio-user-5/domain/2 -r /tmp/vfio-user-5/spdk5.sock -Z 5 00:08:04.183 [2024-12-12 06:46:11.487614] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 00:08:04.183 [2024-12-12 06:46:11.487685] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1166857 ] 00:08:04.183 [2024-12-12 06:46:11.567850] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.183 [2024-12-12 06:46:11.610638] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.443 INFO: Running with entropic power schedule (0xFF, 100). 
00:08:04.443 INFO: Seed: 3048794995 00:08:04.443 INFO: Loaded 1 modules (388184 inline 8-bit counters): 388184 [0x2c4b80c, 0x2caa464), 00:08:04.443 INFO: Loaded 1 PC tables (388184 PCs): 388184 [0x2caa468,0x32969e8), 00:08:04.443 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:08:04.443 INFO: A corpus is not provided, starting from an empty corpus 00:08:04.443 #2 INITED exec/s: 0 rss: 67Mb 00:08:04.443 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:04.443 This may also happen if the target rejected all inputs we tried so far 00:08:04.443 [2024-12-12 06:46:11.850744] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /tmp/vfio-user-5/domain/2: enabling controller 00:08:04.443 [2024-12-12 06:46:11.894198] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:04.443 [2024-12-12 06:46:11.894233] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:04.961 NEW_FUNC[1/674]: 0x43df08 in fuzz_vfio_user_irq_set /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:171 00:08:04.961 NEW_FUNC[2/674]: 0x441118 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:04.961 #29 NEW cov: 11235 ft: 11176 corp: 2/14b lim: 13 exec/s: 0 rss: 73Mb L: 13/13 MS: 2 InsertByte-InsertRepeatedBytes- 00:08:04.962 [2024-12-12 06:46:12.341747] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:04.962 [2024-12-12 06:46:12.341788] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:04.962 #45 NEW cov: 11249 ft: 13888 corp: 3/27b lim: 13 exec/s: 0 rss: 75Mb L: 13/13 MS: 1 ChangeBit- 00:08:05.285 [2024-12-12 06:46:12.508107] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:05.285 [2024-12-12 06:46:12.508139] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:05.285 #46 NEW cov: 11249 ft: 14565 corp: 4/40b lim: 13 exec/s: 0 rss: 76Mb L: 13/13 MS: 1 CopyPart- 00:08:05.285 [2024-12-12 06:46:12.672156] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:05.285 [2024-12-12 06:46:12.672188] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:05.285 NEW_FUNC[1/1]: 0x1c24b08 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:652 00:08:05.285 #57 NEW cov: 11266 ft: 15053 corp: 5/53b lim: 13 exec/s: 0 rss: 76Mb L: 13/13 MS: 1 ChangeBinInt- 00:08:05.544 [2024-12-12 06:46:12.838380] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:05.544 [2024-12-12 06:46:12.838411] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:05.544 #66 NEW cov: 11266 ft: 15203 corp: 6/66b lim: 13 exec/s: 66 rss: 76Mb L: 13/13 MS: 4 ChangeBit-CopyPart-EraseBytes-InsertRepeatedBytes- 00:08:05.544 [2024-12-12 06:46:13.012873] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:05.544 [2024-12-12 06:46:13.012903] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:05.802 #67 NEW cov: 11269 ft: 15241 corp: 7/79b lim: 13 exec/s: 67 rss: 76Mb L: 13/13 MS: 1 CopyPart- 00:08:05.802 [2024-12-12 06:46:13.176885] vfio_user.c:3143:vfio_user_log: *ERROR*: 
/tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:05.802 [2024-12-12 06:46:13.176915] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:05.802 #71 NEW cov: 11269 ft: 15275 corp: 8/92b lim: 13 exec/s: 71 rss: 76Mb L: 13/13 MS: 4 CrossOver-ChangeBinInt-InsertByte-CrossOver- 00:08:06.060 [2024-12-12 06:46:13.342582] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:06.060 [2024-12-12 06:46:13.342617] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:06.060 #72 NEW cov: 11269 ft: 16658 corp: 9/105b lim: 13 exec/s: 72 rss: 76Mb L: 13/13 MS: 1 ChangeBit- 00:08:06.060 [2024-12-12 06:46:13.511679] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:06.060 [2024-12-12 06:46:13.511709] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:06.319 #83 NEW cov: 11269 ft: 16916 corp: 10/118b lim: 13 exec/s: 83 rss: 76Mb L: 13/13 MS: 1 ChangeByte- 00:08:06.319 [2024-12-12 06:46:13.684245] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:06.319 [2024-12-12 06:46:13.684273] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:06.319 #84 NEW cov: 11276 ft: 16995 corp: 11/131b lim: 13 exec/s: 84 rss: 76Mb L: 13/13 MS: 1 ShuffleBytes- 00:08:06.578 [2024-12-12 06:46:13.847120] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:06.578 [2024-12-12 06:46:13.847152] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:06.578 #85 NEW cov: 11276 ft: 17380 corp: 12/144b lim: 13 exec/s: 42 rss: 76Mb L: 13/13 MS: 1 ShuffleBytes- 00:08:06.578 #85 DONE cov: 11276 ft: 17380 corp: 12/144b lim: 13 exec/s: 42 rss: 76Mb 00:08:06.578 Done 85 runs in 2 second(s) 00:08:06.578 [2024-12-12 06:46:13.962344] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /tmp/vfio-user-5/domain/2: disabling controller 00:08:06.837 06:46:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-5 /var/tmp/suppress_vfio_fuzz 00:08:06.837 06:46:14 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:06.837 06:46:14 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:06.837 06:46:14 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1 00:08:06.837 06:46:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=6 00:08:06.837 06:46:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:08:06.837 06:46:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:08:06.837 06:46:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:08:06.837 06:46:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-6 00:08:06.837 06:46:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-6/domain/1 00:08:06.837 06:46:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-6/domain/2 00:08:06.837 06:46:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-6/fuzz_vfio_json.conf 00:08:06.837 06:46:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:08:06.837 06:46:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local 
LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:08:06.837 06:46:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-6 /tmp/vfio-user-6/domain/1 /tmp/vfio-user-6/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:08:06.837 06:46:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-6/domain/1%; 00:08:06.837 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-6/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:06.837 06:46:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:06.837 06:46:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:08:06.837 06:46:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-6/domain/1 -c /tmp/vfio-user-6/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 -Y /tmp/vfio-user-6/domain/2 -r /tmp/vfio-user-6/spdk6.sock -Z 6 00:08:06.837 [2024-12-12 06:46:14.227358] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 00:08:06.837 [2024-12-12 06:46:14.227428] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1167265 ] 00:08:06.837 [2024-12-12 06:46:14.309144] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.837 [2024-12-12 06:46:14.349707] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.096 INFO: Running with entropic power schedule (0xFF, 100). 00:08:07.096 INFO: Seed: 1495832934 00:08:07.096 INFO: Loaded 1 modules (388184 inline 8-bit counters): 388184 [0x2c4b80c, 0x2caa464), 00:08:07.096 INFO: Loaded 1 PC tables (388184 PCs): 388184 [0x2caa468,0x32969e8), 00:08:07.096 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:08:07.096 INFO: A corpus is not provided, starting from an empty corpus 00:08:07.096 #2 INITED exec/s: 0 rss: 67Mb 00:08:07.096 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:08:07.096 This may also happen if the target rejected all inputs we tried so far
00:08:07.096 [2024-12-12 06:46:14.600473] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /tmp/vfio-user-6/domain/2: enabling controller
00:08:07.355 [2024-12-12 06:46:14.644208] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:08:07.355 [2024-12-12 06:46:14.644241] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:08:07.614 NEW_FUNC[1/678]: 0x43ebf8 in fuzz_vfio_user_set_msix /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:190
00:08:07.614 NEW_FUNC[2/678]: 0x441118 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220
00:08:07.614 #8 NEW cov: 11205 ft: 11189 corp: 2/10b lim: 9 exec/s: 0 rss: 74Mb L: 9/9 MS: 1 InsertRepeatedBytes-
00:08:07.614 [2024-12-12 06:46:15.089639] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:08:07.614 [2024-12-12 06:46:15.089678] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:08:07.873 #9 NEW cov: 11255 ft: 14033 corp: 3/19b lim: 9 exec/s: 0 rss: 75Mb L: 9/9 MS: 1 CrossOver-
00:08:07.873 [2024-12-12 06:46:15.252766] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:08:07.873 [2024-12-12 06:46:15.252797] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:08:07.873 #20 NEW cov: 11255 ft: 14900 corp: 4/28b lim: 9 exec/s: 0 rss: 76Mb L: 9/9 MS: 1 CopyPart-
00:08:08.133 [2024-12-12 06:46:15.414801] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:08:08.133 [2024-12-12 06:46:15.414832] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:08:08.133 NEW_FUNC[1/1]: 0x1c24b08 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:652
00:08:08.133 #21 NEW cov: 11272 ft: 15502 corp: 5/37b lim: 9 exec/s: 0 rss: 76Mb L: 9/9 MS: 1 CMP- DE: "\000\000\000\200"-
00:08:08.133 [2024-12-12 06:46:15.579022] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:08:08.133 [2024-12-12 06:46:15.579051] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:08:08.393 #22 NEW cov: 11272 ft: 15866 corp: 6/46b lim: 9 exec/s: 22 rss: 76Mb L: 9/9 MS: 1 ChangeBit-
00:08:08.393 [2024-12-12 06:46:15.745494] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:08:08.393 [2024-12-12 06:46:15.745523] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:08:08.393 #28 NEW cov: 11272 ft: 15956 corp: 7/55b lim: 9 exec/s: 28 rss: 76Mb L: 9/9 MS: 1 ChangeBinInt-
00:08:08.393 [2024-12-12 06:46:15.907463] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:08:08.393 [2024-12-12 06:46:15.907493] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:08:08.653 #29 NEW cov: 11272 ft: 16376 corp: 8/64b lim: 9 exec/s: 29 rss: 76Mb L: 9/9 MS: 1 ChangeBit-
00:08:08.653 [2024-12-12 06:46:16.069145] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:08:08.653 [2024-12-12 06:46:16.069187] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:08:08.653 #30 NEW cov: 11272 ft: 16645 corp: 9/73b lim: 9 exec/s: 30 rss: 76Mb L: 9/9 MS: 1 PersAutoDict- DE: "\000\000\000\200"-
00:08:08.912 [2024-12-12 06:46:16.230861] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:08:08.912 [2024-12-12 06:46:16.230893] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:08:08.912 #31 NEW cov: 11272 ft: 16933 corp: 10/82b lim: 9 exec/s: 31 rss: 76Mb L: 9/9 MS: 1 ShuffleBytes-
00:08:08.912 [2024-12-12 06:46:16.392610] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:08:08.912 [2024-12-12 06:46:16.392640] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:08:09.171 #32 NEW cov: 11279 ft: 16961 corp: 11/91b lim: 9 exec/s: 32 rss: 76Mb L: 9/9 MS: 1 CopyPart-
00:08:09.171 [2024-12-12 06:46:16.556878] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:08:09.171 [2024-12-12 06:46:16.556907] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:08:09.171 #33 NEW cov: 11279 ft: 16982 corp: 12/100b lim: 9 exec/s: 16 rss: 76Mb L: 9/9 MS: 1 ChangeByte-
00:08:09.171 #33 DONE cov: 11279 ft: 16982 corp: 12/100b lim: 9 exec/s: 16 rss: 76Mb
00:08:09.171 ###### Recommended dictionary. ######
00:08:09.171 "\000\000\000\200" # Uses: 1
00:08:09.171 ###### End of recommended dictionary. ######
00:08:09.171 Done 33 runs in 2 second(s)
00:08:09.171 [2024-12-12 06:46:16.673344] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /tmp/vfio-user-6/domain/2: disabling controller
00:08:09.430 06:46:16 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-6 /var/tmp/suppress_vfio_fuzz
00:08:09.430 06:46:16 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:08:09.430 06:46:16 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:08:09.430 06:46:16 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@84 -- # trap - SIGINT SIGTERM EXIT
00:08:09.430
00:08:09.430 real 0m19.319s
00:08:09.430 user 0m27.123s
00:08:09.430 sys 0m1.824s
00:08:09.430 06:46:16 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:09.430 06:46:16 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@10 -- # set +x
00:08:09.430 ************************************
00:08:09.430 END TEST vfio_llvm_fuzz
00:08:09.430 ************************************
00:08:09.430
00:08:09.430 real 1m23.000s
00:08:09.430 user 2m7.366s
00:08:09.430 sys 0m9.106s
00:08:09.430 06:46:16 llvm_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:09.430 06:46:16 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x
00:08:09.690 ************************************
00:08:09.690 END TEST llvm_fuzz
00:08:09.690 ************************************
00:08:09.690 06:46:16 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:08:09.690 06:46:16 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:08:09.690 06:46:16 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:08:09.690 06:46:16 -- common/autotest_common.sh@726 -- # xtrace_disable
00:08:09.690 06:46:16 -- common/autotest_common.sh@10 -- # set +x
00:08:09.690 06:46:16 -- spdk/autotest.sh@388 -- # autotest_cleanup
00:08:09.690 06:46:16 -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:08:09.690 06:46:16 -- common/autotest_common.sh@1397 -- # xtrace_disable
00:08:09.690 06:46:16 -- common/autotest_common.sh@10 -- # set +x
00:08:16.274 INFO: APP EXITING
00:08:16.274 INFO: killing all VMs
00:08:16.274 INFO: killing vhost app
00:08:16.274 INFO: EXIT DONE
00:08:18.815 Waiting for block devices as requested
00:08:18.815 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:08:18.815 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:08:19.076 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:08:19.076 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:08:19.076 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:08:19.076 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:08:19.336 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:08:19.336 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
00:08:19.336 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:08:19.595 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:08:19.595 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:08:19.595 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
00:08:19.859 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:08:19.859 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:08:19.859 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:08:20.120 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
00:08:20.120 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme
00:08:24.318 Cleaning
00:08:24.318 Removing: /dev/shm/spdk_tgt_trace.pid1139533
00:08:24.318 Removing: /var/run/dpdk/spdk_pid1137073
00:08:24.318 Removing: /var/run/dpdk/spdk_pid1138213
00:08:24.318 Removing: /var/run/dpdk/spdk_pid1139533
00:08:24.318 Removing: /var/run/dpdk/spdk_pid1139999
00:08:24.318 Removing: /var/run/dpdk/spdk_pid1141081
00:08:24.318 Removing: /var/run/dpdk/spdk_pid1141098
00:08:24.318 Removing: /var/run/dpdk/spdk_pid1142207
00:08:24.318 Removing: /var/run/dpdk/spdk_pid1142219
00:08:24.318 Removing: /var/run/dpdk/spdk_pid1142656
00:08:24.318 Removing: /var/run/dpdk/spdk_pid1142976
00:08:24.318 Removing: /var/run/dpdk/spdk_pid1143303
00:08:24.318 Removing: /var/run/dpdk/spdk_pid1143643
00:08:24.318 Removing: /var/run/dpdk/spdk_pid1143795
00:08:24.318 Removing: /var/run/dpdk/spdk_pid1144012
00:08:24.318 Removing: /var/run/dpdk/spdk_pid1144296
00:08:24.318 Removing: /var/run/dpdk/spdk_pid1144616
00:08:24.318 Removing: /var/run/dpdk/spdk_pid1145422
00:08:24.318 Removing: /var/run/dpdk/spdk_pid1148421
00:08:24.318 Removing: /var/run/dpdk/spdk_pid1148666
00:08:24.318 Removing: /var/run/dpdk/spdk_pid1148955
00:08:24.318 Removing: /var/run/dpdk/spdk_pid1148959
00:08:24.318 Removing: /var/run/dpdk/spdk_pid1149526
00:08:24.318 Removing: /var/run/dpdk/spdk_pid1149533
00:08:24.318 Removing: /var/run/dpdk/spdk_pid1150096
00:08:24.318 Removing: /var/run/dpdk/spdk_pid1150105
00:08:24.318 Removing: /var/run/dpdk/spdk_pid1150403
00:08:24.318 Removing: /var/run/dpdk/spdk_pid1150409
00:08:24.318 Removing: /var/run/dpdk/spdk_pid1150702
00:08:24.318 Removing: /var/run/dpdk/spdk_pid1150710
00:08:24.318 Removing: /var/run/dpdk/spdk_pid1151345
00:08:24.318 Removing: /var/run/dpdk/spdk_pid1151573
00:08:24.318 Removing: /var/run/dpdk/spdk_pid1151712
00:08:24.318 Removing: /var/run/dpdk/spdk_pid1151993
00:08:24.318 Removing: /var/run/dpdk/spdk_pid1152573
00:08:24.318 Removing: /var/run/dpdk/spdk_pid1153034
00:08:24.318 Removing: /var/run/dpdk/spdk_pid1153577
00:08:24.318 Removing: /var/run/dpdk/spdk_pid1153995
00:08:24.318 Removing: /var/run/dpdk/spdk_pid1154738
00:08:24.318 Removing: /var/run/dpdk/spdk_pid1155482
00:08:24.318 Removing: /var/run/dpdk/spdk_pid1155785
00:08:24.318 Removing: /var/run/dpdk/spdk_pid1156314
00:08:24.318 Removing: /var/run/dpdk/spdk_pid1156749
00:08:24.318 Removing: /var/run/dpdk/spdk_pid1157133
00:08:24.318 Removing: /var/run/dpdk/spdk_pid1157673
00:08:24.318 Removing: /var/run/dpdk/spdk_pid1158086
00:08:24.318 Removing: /var/run/dpdk/spdk_pid1158493
00:08:24.318 Removing: /var/run/dpdk/spdk_pid1159023
00:08:24.318 Removing: /var/run/dpdk/spdk_pid1159341
00:08:24.318 Removing: /var/run/dpdk/spdk_pid1159849
00:08:24.318 Removing: /var/run/dpdk/spdk_pid1160378
00:08:24.318 Removing: /var/run/dpdk/spdk_pid1160670
00:08:24.318 Removing: /var/run/dpdk/spdk_pid1161195
00:08:24.318 Removing: /var/run/dpdk/spdk_pid1161687
00:08:24.318 Removing: /var/run/dpdk/spdk_pid1162021
00:08:24.318 Removing: /var/run/dpdk/spdk_pid1162548
00:08:24.318 Removing: /var/run/dpdk/spdk_pid1162966
00:08:24.318 Removing: /var/run/dpdk/spdk_pid1163372
00:08:24.318 Removing: /var/run/dpdk/spdk_pid1163912
00:08:24.318 Removing: /var/run/dpdk/spdk_pid1164532
00:08:24.318 Removing: /var/run/dpdk/spdk_pid1164975
00:08:24.318 Removing: /var/run/dpdk/spdk_pid1165366
00:08:24.318 Removing: /var/run/dpdk/spdk_pid1165896
00:08:24.318 Removing: /var/run/dpdk/spdk_pid1166435
00:08:24.318 Removing: /var/run/dpdk/spdk_pid1166857
00:08:24.318 Removing: /var/run/dpdk/spdk_pid1167265
00:08:24.318 Clean
00:08:24.318 06:46:31 -- common/autotest_common.sh@1453 -- # return 0
00:08:24.318 06:46:31 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:08:24.318 06:46:31 -- common/autotest_common.sh@732 -- # xtrace_disable
00:08:24.318 06:46:31 -- common/autotest_common.sh@10 -- # set +x
00:08:24.318 06:46:31 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:08:24.318 06:46:31 -- common/autotest_common.sh@732 -- # xtrace_disable
00:08:24.318 06:46:31 -- common/autotest_common.sh@10 -- # set +x
00:08:24.318 06:46:31 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt
00:08:24.318 06:46:31 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log ]]
00:08:24.318 06:46:31 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log
00:08:24.318 06:46:31 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:08:24.318 06:46:31 -- spdk/autotest.sh@398 -- # hostname
00:08:24.318 06:46:31 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -c --no-external -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk -t spdk-wfp-20 -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_test.info
00:08:24.318 geninfo: WARNING: invalid characters removed from testname!
00:08:26.857 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/mdns_server.gcda
00:08:32.140 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_stubs.gcda
00:08:34.678 06:46:41 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -a /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info
00:08:42.800 06:46:49 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info
00:08:48.082 06:46:54 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info
00:08:53.423 06:46:59 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info
00:08:58.697 06:47:05 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info
00:09:03.970 06:47:10 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info
00:09:09.243 06:47:15 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:09:09.243 06:47:15 -- spdk/autorun.sh@1 -- $ timing_finish
00:09:09.243 06:47:15 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt ]]
00:09:09.243 06:47:15 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:09:09.243 06:47:15 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:09:09.243 06:47:15 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt
00:09:09.243 + [[ -n 1026415 ]]
00:09:09.243 + sudo kill 1026415
00:09:09.253 [Pipeline] }
00:09:09.268 [Pipeline] // stage
00:09:09.273 [Pipeline] }
00:09:09.287 [Pipeline] // timeout
00:09:09.293 [Pipeline] }
00:09:09.307 [Pipeline] // catchError
00:09:09.312 [Pipeline] }
00:09:09.327 [Pipeline] // wrap
00:09:09.333 [Pipeline] }
00:09:09.347 [Pipeline] // catchError
00:09:09.357 [Pipeline] stage
00:09:09.359 [Pipeline] { (Epilogue)
00:09:09.372 [Pipeline] catchError
00:09:09.374 [Pipeline] {
00:09:09.388 [Pipeline] echo
00:09:09.390 Cleanup processes
00:09:09.396 [Pipeline] sh
00:09:09.684 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:09:09.684 1175894 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:09:09.698 [Pipeline] sh
00:09:09.984 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:09:09.984 ++ grep -v 'sudo pgrep'
00:09:09.984 ++ awk '{print $1}'
00:09:09.984 + sudo kill -9
00:09:09.984 + true
00:09:09.996 [Pipeline] sh
00:09:10.281 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:09:10.281 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,718 MiB
00:09:10.281 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,718 MiB
00:09:11.660 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,718 MiB
00:09:21.654 [Pipeline] sh
00:09:21.940 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:09:21.940 Artifacts sizes are good
00:09:21.954 [Pipeline] archiveArtifacts
00:09:21.962 Archiving artifacts
00:09:22.124 [Pipeline] sh
00:09:22.443 + sudo chown -R sys_sgci: /var/jenkins/workspace/short-fuzz-phy-autotest
00:09:22.459 [Pipeline] cleanWs
00:09:22.469 [WS-CLEANUP] Deleting project workspace...
00:09:22.469 [WS-CLEANUP] Deferred wipeout is used...
00:09:22.476 [WS-CLEANUP] done
00:09:22.478 [Pipeline] }
00:09:22.495 [Pipeline] // catchError
00:09:22.507 [Pipeline] sh
00:09:22.793 + logger -p user.info -t JENKINS-CI
00:09:22.802 [Pipeline] }
00:09:22.817 [Pipeline] // stage
00:09:22.824 [Pipeline] }
00:09:22.838 [Pipeline] // node
00:09:22.843 [Pipeline] End of Pipeline
00:09:22.886 Finished: SUCCESS