00:00:00.000 Started by upstream project "autotest-per-patch" build number 132579 00:00:00.000 originally caused by: 00:00:00.000 Started by user sys_sgci 00:00:00.018 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.019 The recommended git tool is: git 00:00:00.019 using credential 00000000-0000-0000-0000-000000000002 00:00:00.021 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.068 Fetching changes from the remote Git repository 00:00:00.071 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.108 Using shallow fetch with depth 1 00:00:00.108 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.108 > git --version # timeout=10 00:00:00.140 > git --version # 'git version 2.39.2' 00:00:00.140 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.157 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.157 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:07.191 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:07.201 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:07.214 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:07.214 > git config core.sparsecheckout # timeout=10 00:00:07.224 > git read-tree -mu HEAD # timeout=10 00:00:07.240 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:07.269 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:07.269 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:07.377 [Pipeline] Start of Pipeline 00:00:07.392 [Pipeline] library 00:00:07.395 Loading library shm_lib@master 00:00:07.395 Library shm_lib@master is cached. Copying from home. 00:00:07.409 [Pipeline] node 00:00:07.423 Running on WFP20 in /var/jenkins/workspace/short-fuzz-phy-autotest 00:00:07.425 [Pipeline] { 00:00:07.433 [Pipeline] catchError 00:00:07.435 [Pipeline] { 00:00:07.444 [Pipeline] wrap 00:00:07.452 [Pipeline] { 00:00:07.460 [Pipeline] stage 00:00:07.462 [Pipeline] { (Prologue) 00:00:07.749 [Pipeline] sh 00:00:08.034 + logger -p user.info -t JENKINS-CI 00:00:08.055 [Pipeline] echo 00:00:08.057 Node: WFP20 00:00:08.065 [Pipeline] sh 00:00:08.364 [Pipeline] setCustomBuildProperty 00:00:08.375 [Pipeline] echo 00:00:08.377 Cleanup processes 00:00:08.382 [Pipeline] sh 00:00:08.665 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:08.665 2236163 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:08.679 [Pipeline] sh 00:00:08.963 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:08.963 ++ grep -v 'sudo pgrep' 00:00:08.963 ++ awk '{print $1}' 00:00:08.963 + sudo kill -9 00:00:08.963 + true 00:00:08.981 [Pipeline] cleanWs 00:00:08.993 [WS-CLEANUP] Deleting project workspace... 00:00:08.993 [WS-CLEANUP] Deferred wipeout is used... 
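A minimal sketch of the stale-process cleanup performed in the prologue above, assuming the same workspace path shown in the log; any SPDK processes left over from a previous run are listed with pgrep, the pgrep invocation itself is filtered out, and the survivors are killed before the workspace is wiped:

    #!/usr/bin/env bash
    # Sketch of the cleanup step traced above; WORKSPACE comes from this log.
    WORKSPACE=/var/jenkins/workspace/short-fuzz-phy-autotest
    # List processes still referencing the workspace checkout, drop the
    # pgrep command itself, and keep only the PIDs (column 1).
    pids=$(sudo pgrep -af "$WORKSPACE/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')
    # Kill leftovers; '|| true' keeps an empty PID list from failing the job.
    sudo kill -9 $pids || true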
00:00:09.000 [WS-CLEANUP] done 00:00:09.005 [Pipeline] setCustomBuildProperty 00:00:09.017 [Pipeline] sh 00:00:09.296 + sudo git config --global --replace-all safe.directory '*' 00:00:09.362 [Pipeline] httpRequest 00:00:12.393 [Pipeline] echo 00:00:12.394 Sorcerer 10.211.164.101 is dead 00:00:12.401 [Pipeline] httpRequest 00:00:15.425 [Pipeline] echo 00:00:15.426 Sorcerer 10.211.164.101 is dead 00:00:15.431 [Pipeline] httpRequest 00:00:15.490 [Pipeline] echo 00:00:15.491 Sorcerer 10.211.164.96 is dead 00:00:15.496 [Pipeline] httpRequest 00:00:16.027 [Pipeline] echo 00:00:16.028 Sorcerer 10.211.164.20 is alive 00:00:16.034 [Pipeline] retry 00:00:16.036 [Pipeline] { 00:00:16.043 [Pipeline] httpRequest 00:00:16.046 HttpMethod: GET 00:00:16.047 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:16.047 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:16.048 Response Code: HTTP/1.1 200 OK 00:00:16.048 Success: Status code 200 is in the accepted range: 200,404 00:00:16.049 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:16.298 [Pipeline] } 00:00:16.308 [Pipeline] // retry 00:00:16.315 [Pipeline] sh 00:00:16.602 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:16.618 [Pipeline] httpRequest 00:00:16.942 [Pipeline] echo 00:00:16.944 Sorcerer 10.211.164.20 is alive 00:00:16.951 [Pipeline] retry 00:00:16.953 [Pipeline] { 00:00:16.966 [Pipeline] httpRequest 00:00:16.969 HttpMethod: GET 00:00:16.970 URL: http://10.211.164.20/packages/spdk_2e10c84c822790902c20cbe1ae21fdaeff91a220.tar.gz 00:00:16.970 Sending request to url: http://10.211.164.20/packages/spdk_2e10c84c822790902c20cbe1ae21fdaeff91a220.tar.gz 00:00:16.971 Response Code: HTTP/1.1 404 Not Found 00:00:16.971 Success: Status code 404 is in the accepted range: 200,404 00:00:16.972 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk_2e10c84c822790902c20cbe1ae21fdaeff91a220.tar.gz 00:00:16.977 [Pipeline] } 00:00:16.988 [Pipeline] // retry 00:00:16.993 [Pipeline] sh 00:00:17.271 + rm -f spdk_2e10c84c822790902c20cbe1ae21fdaeff91a220.tar.gz 00:00:17.282 [Pipeline] retry 00:00:17.284 [Pipeline] { 00:00:17.299 [Pipeline] checkout 00:00:17.306 The recommended git tool is: NONE 00:00:17.327 using credential 00000000-0000-0000-0000-000000000002 00:00:17.329 Wiping out workspace first. 00:00:17.337 Cloning the remote Git repository 00:00:17.340 Honoring refspec on initial clone 00:00:17.345 Cloning repository https://review.spdk.io/gerrit/a/spdk/spdk 00:00:17.346 > git init /var/jenkins/workspace/short-fuzz-phy-autotest/spdk # timeout=10 00:00:17.355 Using reference repository: /var/ci_repos/spdk_multi 00:00:17.355 Fetching upstream changes from https://review.spdk.io/gerrit/a/spdk/spdk 00:00:17.355 > git --version # timeout=10 00:00:17.364 > git --version # 'git version 2.45.2' 00:00:17.365 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:17.371 Setting http proxy: proxy-dmz.intel.com:911 00:00:17.371 > git fetch --tags --force --progress -- https://review.spdk.io/gerrit/a/spdk/spdk refs/changes/38/25438/7 +refs/heads/master:refs/remotes/origin/master # timeout=10 00:00:57.109 Avoid second fetch 00:00:57.127 Checking out Revision 2e10c84c822790902c20cbe1ae21fdaeff91a220 (FETCH_HEAD) 00:00:57.327 Commit message: "nvmf: Expose DIF type of namespace to host again" 00:00:57.338 First time build. 
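The SPDK checkout above is accelerated with the local mirror /var/ci_repos/spdk_multi; a hedged sketch of the equivalent clone sequence (repository URL, refspec, and mirror path are taken from this log, the rest is illustrative):

    # Clone borrowing objects from the local CI mirror instead of the network.
    git clone --reference /var/ci_repos/spdk_multi \
        https://review.spdk.io/gerrit/a/spdk/spdk spdk
    cd spdk
    # Fetch the Gerrit change under test alongside master, as the job does.
    git fetch origin refs/changes/38/25438/7 \
        +refs/heads/master:refs/remotes/origin/master
    git checkout -f FETCH_HEAD
    # Submodules reuse the same mirror, matching the commands shown below.
    git submodule update --init --recursive --reference /var/ci_repos/spdk_multi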
Skipping changelog. 00:00:57.084 > git config remote.origin.url https://review.spdk.io/gerrit/a/spdk/spdk # timeout=10 00:00:57.090 > git config --add remote.origin.fetch refs/changes/38/25438/7 # timeout=10 00:00:57.096 > git config --add remote.origin.fetch +refs/heads/master:refs/remotes/origin/master # timeout=10 00:00:57.111 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:57.118 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:57.129 > git config core.sparsecheckout # timeout=10 00:00:57.135 > git checkout -f 2e10c84c822790902c20cbe1ae21fdaeff91a220 # timeout=10 00:00:57.329 > git rev-list --no-walk a9e1e4309cdc83028f205f483fd163a9ff0da22f # timeout=10 00:00:57.343 > git remote # timeout=10 00:00:57.349 > git submodule init # timeout=10 00:00:57.437 > git submodule sync # timeout=10 00:00:57.523 > git config --get remote.origin.url # timeout=10 00:00:57.533 > git submodule init # timeout=10 00:00:57.620 > git config -f .gitmodules --get-regexp ^submodule\.(.+)\.url # timeout=10 00:00:57.627 > git config --get submodule.dpdk.url # timeout=10 00:00:57.632 > git remote # timeout=10 00:00:57.639 > git config --get remote.origin.url # timeout=10 00:00:57.645 > git config -f .gitmodules --get submodule.dpdk.path # timeout=10 00:00:57.651 > git config --get submodule.intel-ipsec-mb.url # timeout=10 00:00:57.657 > git remote # timeout=10 00:00:57.663 > git config --get remote.origin.url # timeout=10 00:00:57.669 > git config -f .gitmodules --get submodule.intel-ipsec-mb.path # timeout=10 00:00:57.675 > git config --get submodule.isa-l.url # timeout=10 00:00:57.681 > git remote # timeout=10 00:00:57.687 > git config --get remote.origin.url # timeout=10 00:00:57.693 > git config -f .gitmodules --get submodule.isa-l.path # timeout=10 00:00:57.699 > git config --get submodule.ocf.url # timeout=10 00:00:57.705 > git remote # timeout=10 00:00:57.711 > git config --get remote.origin.url # timeout=10 00:00:57.718 > git config -f .gitmodules --get submodule.ocf.path # timeout=10 00:00:57.723 > git config --get submodule.libvfio-user.url # timeout=10 00:00:57.729 > git remote # timeout=10 00:00:57.732 > git config --get remote.origin.url # timeout=10 00:00:57.738 > git config -f .gitmodules --get submodule.libvfio-user.path # timeout=10 00:00:57.744 > git config --get submodule.xnvme.url # timeout=10 00:00:57.750 > git remote # timeout=10 00:00:57.753 > git config --get remote.origin.url # timeout=10 00:00:57.759 > git config -f .gitmodules --get submodule.xnvme.path # timeout=10 00:00:57.765 > git config --get submodule.isa-l-crypto.url # timeout=10 00:00:57.771 > git remote # timeout=10 00:00:57.777 > git config --get remote.origin.url # timeout=10 00:00:57.783 > git config -f .gitmodules --get submodule.isa-l-crypto.path # timeout=10 00:00:57.789 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:57.790 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:57.790 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:57.790 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:57.790 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:57.791 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:57.791 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:57.793 Setting http proxy: proxy-dmz.intel.com:911 00:00:57.793 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi dpdk # timeout=10 00:00:57.796 Setting http proxy: proxy-dmz.intel.com:911 
00:00:57.796 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi intel-ipsec-mb # timeout=10 00:00:57.796 Setting http proxy: proxy-dmz.intel.com:911 00:00:57.797 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi isa-l # timeout=10 00:00:57.797 Setting http proxy: proxy-dmz.intel.com:911 00:00:57.797 Setting http proxy: proxy-dmz.intel.com:911 00:00:57.797 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi ocf # timeout=10 00:00:57.797 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi isa-l-crypto # timeout=10 00:00:57.797 Setting http proxy: proxy-dmz.intel.com:911 00:00:57.797 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi libvfio-user # timeout=10 00:00:57.797 Setting http proxy: proxy-dmz.intel.com:911 00:00:57.798 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi xnvme # timeout=10 00:01:05.311 [Pipeline] dir 00:01:05.312 Running in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:01:05.314 [Pipeline] { 00:01:05.331 [Pipeline] sh 00:01:05.622 ++ nproc 00:01:05.623 + threads=112 00:01:05.623 + git repack -a -d --threads=112 00:01:10.899 + git submodule foreach git repack -a -d --threads=112 00:01:10.899 Entering 'dpdk' 00:01:14.190 Entering 'intel-ipsec-mb' 00:01:14.756 Entering 'isa-l' 00:01:14.756 Entering 'isa-l-crypto' 00:01:15.016 Entering 'libvfio-user' 00:01:15.016 Entering 'ocf' 00:01:15.275 Entering 'xnvme' 00:01:15.842 + find .git -type f -name alternates -print -delete 00:01:15.842 .git/objects/info/alternates 00:01:15.842 .git/modules/xnvme/objects/info/alternates 00:01:15.842 .git/modules/libvfio-user/objects/info/alternates 00:01:15.842 .git/modules/ocf/objects/info/alternates 00:01:15.842 .git/modules/isa-l-crypto/objects/info/alternates 00:01:15.842 .git/modules/intel-ipsec-mb/objects/info/alternates 00:01:15.842 .git/modules/dpdk/objects/info/alternates 00:01:15.842 .git/modules/isa-l/objects/info/alternates 00:01:15.850 [Pipeline] } 00:01:15.864 [Pipeline] // dir 00:01:15.869 [Pipeline] } 00:01:15.878 [Pipeline] // retry 00:01:15.884 [Pipeline] sh 00:01:16.164 + hash pigz 00:01:16.164 + tar -cf spdk_2e10c84c822790902c20cbe1ae21fdaeff91a220.tar.gz -I pigz spdk 00:01:16.744 [Pipeline] retry 00:01:16.746 [Pipeline] { 00:01:16.759 [Pipeline] httpRequest 00:01:16.766 HttpMethod: PUT 00:01:16.766 URL: http://10.211.164.20/cgi-bin/sorcerer.py?group=packages&filename=spdk_2e10c84c822790902c20cbe1ae21fdaeff91a220.tar.gz 00:01:16.769 Sending request to url: http://10.211.164.20/cgi-bin/sorcerer.py?group=packages&filename=spdk_2e10c84c822790902c20cbe1ae21fdaeff91a220.tar.gz 00:01:19.499 Response Code: HTTP/1.1 200 OK 00:01:19.506 Success: Status code 200 is in the accepted range: 200 00:01:19.508 [Pipeline] } 00:01:19.525 [Pipeline] // retry 00:01:19.532 [Pipeline] echo 00:01:19.533 00:01:19.533 Locking 00:01:19.533 Waited 0s for lock 00:01:19.533 Everything Fine. 
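Before the tree is archived, the job above dissolves its dependency on the reference repository: borrowed objects are repacked into the checkout, the alternates files are deleted, and the result is compressed with pigz. A condensed sketch of that sequence, following the commands in the trace (the tarball name uses the commit under test):

    cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    threads=$(nproc)
    # Pull every borrowed object into this repository's own packfiles.
    git repack -a -d --threads="$threads"
    git submodule foreach git repack -a -d --threads="$threads"
    # Remove the alternates files so the archive is self-contained.
    find .git -type f -name alternates -print -delete
    # Package the standalone checkout with parallel gzip, as done above.
    cd ..
    tar -cf spdk_2e10c84c822790902c20cbe1ae21fdaeff91a220.tar.gz -I pigz spdk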
Saved: /storage/packages/spdk_2e10c84c822790902c20cbe1ae21fdaeff91a220.tar.gz 00:01:19.533 00:01:19.537 [Pipeline] sh 00:01:19.822 + git -C spdk log --oneline -n5 00:01:19.822 2e10c84c8 nvmf: Expose DIF type of namespace to host again 00:01:19.822 38b931b23 nvmf: Set bdev_ext_io_opts::dif_check_flags_exclude_mask for read/write 00:01:19.822 2f2acf4eb doc: move nvmf_tracing.md to tracing.md 00:01:19.822 5592070b3 doc: update nvmf_tracing.md 00:01:19.822 5ca6db5da nvme_spec: Add SPDK_NVME_IO_FLAGS_PRCHK_MASK 00:01:19.834 [Pipeline] } 00:01:19.843 [Pipeline] // stage 00:01:19.851 [Pipeline] stage 00:01:19.853 [Pipeline] { (Prepare) 00:01:19.864 [Pipeline] writeFile 00:01:19.874 [Pipeline] sh 00:01:20.157 + logger -p user.info -t JENKINS-CI 00:01:20.171 [Pipeline] sh 00:01:20.454 + logger -p user.info -t JENKINS-CI 00:01:20.467 [Pipeline] sh 00:01:20.750 + cat autorun-spdk.conf 00:01:20.750 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:20.750 SPDK_TEST_FUZZER_SHORT=1 00:01:20.750 SPDK_TEST_FUZZER=1 00:01:20.750 SPDK_TEST_SETUP=1 00:01:20.750 SPDK_RUN_UBSAN=1 00:01:20.757 RUN_NIGHTLY=0 00:01:20.761 [Pipeline] readFile 00:01:20.782 [Pipeline] withEnv 00:01:20.785 [Pipeline] { 00:01:20.793 [Pipeline] sh 00:01:21.073 + set -ex 00:01:21.073 + [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf ]] 00:01:21.073 + source /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:01:21.073 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:21.073 ++ SPDK_TEST_FUZZER_SHORT=1 00:01:21.073 ++ SPDK_TEST_FUZZER=1 00:01:21.073 ++ SPDK_TEST_SETUP=1 00:01:21.073 ++ SPDK_RUN_UBSAN=1 00:01:21.073 ++ RUN_NIGHTLY=0 00:01:21.073 + case $SPDK_TEST_NVMF_NICS in 00:01:21.073 + DRIVERS= 00:01:21.073 + [[ -n '' ]] 00:01:21.073 + exit 0 00:01:21.083 [Pipeline] } 00:01:21.097 [Pipeline] // withEnv 00:01:21.103 [Pipeline] } 00:01:21.117 [Pipeline] // stage 00:01:21.129 [Pipeline] catchError 00:01:21.132 [Pipeline] { 00:01:21.147 [Pipeline] timeout 00:01:21.147 Timeout set to expire in 30 min 00:01:21.148 [Pipeline] { 00:01:21.163 [Pipeline] stage 00:01:21.165 [Pipeline] { (Tests) 00:01:21.180 [Pipeline] sh 00:01:21.466 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/short-fuzz-phy-autotest 00:01:21.466 ++ readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest 00:01:21.466 + DIR_ROOT=/var/jenkins/workspace/short-fuzz-phy-autotest 00:01:21.466 + [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest ]] 00:01:21.466 + DIR_SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:01:21.466 + DIR_OUTPUT=/var/jenkins/workspace/short-fuzz-phy-autotest/output 00:01:21.466 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk ]] 00:01:21.466 + [[ ! 
-d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]] 00:01:21.466 + mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/output 00:01:21.466 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]] 00:01:21.466 + [[ short-fuzz-phy-autotest == pkgdep-* ]] 00:01:21.466 + cd /var/jenkins/workspace/short-fuzz-phy-autotest 00:01:21.466 + source /etc/os-release 00:01:21.466 ++ NAME='Fedora Linux' 00:01:21.466 ++ VERSION='39 (Cloud Edition)' 00:01:21.466 ++ ID=fedora 00:01:21.466 ++ VERSION_ID=39 00:01:21.466 ++ VERSION_CODENAME= 00:01:21.466 ++ PLATFORM_ID=platform:f39 00:01:21.466 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:21.466 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:21.466 ++ LOGO=fedora-logo-icon 00:01:21.466 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:21.466 ++ HOME_URL=https://fedoraproject.org/ 00:01:21.466 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:21.466 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:21.466 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:21.466 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:21.466 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:21.466 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:21.466 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:21.466 ++ SUPPORT_END=2024-11-12 00:01:21.466 ++ VARIANT='Cloud Edition' 00:01:21.466 ++ VARIANT_ID=cloud 00:01:21.466 + uname -a 00:01:21.466 Linux spdk-wfp-20 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:21.466 + sudo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:01:24.805 Hugepages 00:01:24.805 node hugesize free / total 00:01:24.805 node0 1048576kB 0 / 0 00:01:24.805 node0 2048kB 0 / 0 00:01:24.805 node1 1048576kB 0 / 0 00:01:24.805 node1 2048kB 0 / 0 00:01:24.805 00:01:24.805 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:24.805 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:01:24.805 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:01:24.805 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:01:24.805 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:01:24.805 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:01:24.805 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:01:24.805 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:01:24.805 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:01:24.805 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:01:24.805 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:01:24.806 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:01:24.806 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:01:24.806 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:01:24.806 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:01:24.806 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:01:24.806 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:01:24.806 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:01:24.806 + rm -f /tmp/spdk-ld-path 00:01:24.806 + source autorun-spdk.conf 00:01:24.806 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:24.806 ++ SPDK_TEST_FUZZER_SHORT=1 00:01:24.806 ++ SPDK_TEST_FUZZER=1 00:01:24.806 ++ SPDK_TEST_SETUP=1 00:01:24.806 ++ SPDK_RUN_UBSAN=1 00:01:24.806 ++ RUN_NIGHTLY=0 00:01:24.806 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:24.806 + [[ -n '' ]] 00:01:24.806 + sudo git config --global --add safe.directory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:01:24.806 + for M in /var/spdk/build-*-manifest.txt 00:01:24.806 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:24.806 + cp /var/spdk/build-kernel-manifest.txt 
/var/jenkins/workspace/short-fuzz-phy-autotest/output/ 00:01:24.806 + for M in /var/spdk/build-*-manifest.txt 00:01:24.806 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:24.806 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/ 00:01:24.806 + for M in /var/spdk/build-*-manifest.txt 00:01:24.806 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:24.806 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/ 00:01:24.806 ++ uname 00:01:24.806 + [[ Linux == \L\i\n\u\x ]] 00:01:24.806 + sudo dmesg -T 00:01:24.806 + sudo dmesg --clear 00:01:24.806 + dmesg_pid=2239225 00:01:24.806 + [[ Fedora Linux == FreeBSD ]] 00:01:24.806 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:24.806 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:24.806 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:24.806 + [[ -x /usr/src/fio-static/fio ]] 00:01:24.806 + export FIO_BIN=/usr/src/fio-static/fio 00:01:24.806 + FIO_BIN=/usr/src/fio-static/fio 00:01:24.806 + sudo dmesg -Tw 00:01:24.806 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\s\h\o\r\t\-\f\u\z\z\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:24.806 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:24.806 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:24.806 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:24.806 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:24.806 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:24.806 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:24.806 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:24.806 + spdk/autorun.sh /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:01:24.806 15:00:49 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:24.806 15:00:49 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:01:24.806 15:00:49 -- short-fuzz-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:24.806 15:00:49 -- short-fuzz-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_FUZZER_SHORT=1 00:01:24.806 15:00:49 -- short-fuzz-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_FUZZER=1 00:01:24.806 15:00:49 -- short-fuzz-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_SETUP=1 00:01:24.806 15:00:49 -- short-fuzz-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_RUN_UBSAN=1 00:01:24.806 15:00:49 -- short-fuzz-phy-autotest/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=0 00:01:24.806 15:00:49 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:24.806 15:00:49 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:01:24.806 15:00:50 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:24.806 15:00:50 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:01:24.806 15:00:50 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:24.806 15:00:50 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:24.806 15:00:50 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:24.806 15:00:50 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:24.806 15:00:50 -- paths/export.sh@2 -- $ 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:24.806 15:00:50 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:24.806 15:00:50 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:24.806 15:00:50 -- paths/export.sh@5 -- $ export PATH 00:01:24.806 15:00:50 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:24.806 15:00:50 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:01:24.806 15:00:50 -- common/autobuild_common.sh@493 -- $ date +%s 00:01:24.806 15:00:50 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732716050.XXXXXX 00:01:24.806 15:00:50 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732716050.NuTmxL 00:01:24.806 15:00:50 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:01:24.806 15:00:50 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:01:24.806 15:00:50 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/' 00:01:24.806 15:00:50 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:24.806 15:00:50 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:24.806 15:00:50 -- common/autobuild_common.sh@509 -- $ get_config_params 00:01:24.806 15:00:50 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:01:24.806 15:00:50 -- common/autotest_common.sh@10 -- $ set +x 00:01:24.806 15:00:50 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator 
--disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:01:24.806 15:00:50 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:01:24.806 15:00:50 -- pm/common@17 -- $ local monitor 00:01:24.806 15:00:50 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:24.806 15:00:50 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:24.806 15:00:50 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:24.806 15:00:50 -- pm/common@21 -- $ date +%s 00:01:24.806 15:00:50 -- pm/common@21 -- $ date +%s 00:01:24.806 15:00:50 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:24.806 15:00:50 -- pm/common@25 -- $ sleep 1 00:01:24.806 15:00:50 -- pm/common@21 -- $ date +%s 00:01:24.806 15:00:50 -- pm/common@21 -- $ date +%s 00:01:24.806 15:00:50 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732716050 00:01:24.806 15:00:50 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732716050 00:01:24.806 15:00:50 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732716050 00:01:24.806 15:00:50 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732716050 00:01:25.064 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732716050_collect-vmstat.pm.log 00:01:25.064 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732716050_collect-cpu-load.pm.log 00:01:25.064 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732716050_collect-cpu-temp.pm.log 00:01:25.064 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732716050_collect-bmc-pm.bmc.pm.log 00:01:26.002 15:00:51 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:01:26.002 15:00:51 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:26.002 15:00:51 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:26.002 15:00:51 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:01:26.002 15:00:51 -- spdk/autobuild.sh@16 -- $ date -u 00:01:26.002 Wed Nov 27 02:00:51 PM UTC 2024 00:01:26.002 15:00:51 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:26.002 v25.01-pre-273-g2e10c84c8 00:01:26.002 15:00:51 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:26.002 15:00:51 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:26.002 15:00:51 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:26.002 15:00:51 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:26.002 15:00:51 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:26.002 15:00:51 -- common/autotest_common.sh@10 -- $ set +x 00:01:26.002 ************************************ 00:01:26.002 START TEST ubsan 00:01:26.002 ************************************ 00:01:26.002 15:00:51 ubsan -- 
common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:01:26.002 using ubsan 00:01:26.002 00:01:26.002 real 0m0.000s 00:01:26.002 user 0m0.000s 00:01:26.002 sys 0m0.000s 00:01:26.002 15:00:51 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:26.002 15:00:51 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:26.002 ************************************ 00:01:26.002 END TEST ubsan 00:01:26.002 ************************************ 00:01:26.002 15:00:51 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:26.002 15:00:51 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:26.002 15:00:51 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:26.002 15:00:51 -- spdk/autobuild.sh@51 -- $ [[ 1 -eq 1 ]] 00:01:26.002 15:00:51 -- spdk/autobuild.sh@52 -- $ llvm_precompile 00:01:26.002 15:00:51 -- common/autobuild_common.sh@445 -- $ run_test autobuild_llvm_precompile _llvm_precompile 00:01:26.002 15:00:51 -- common/autotest_common.sh@1105 -- $ '[' 2 -le 1 ']' 00:01:26.002 15:00:51 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:26.002 15:00:51 -- common/autotest_common.sh@10 -- $ set +x 00:01:26.002 ************************************ 00:01:26.002 START TEST autobuild_llvm_precompile 00:01:26.002 ************************************ 00:01:26.002 15:00:51 autobuild_llvm_precompile -- common/autotest_common.sh@1129 -- $ _llvm_precompile 00:01:26.002 15:00:51 autobuild_llvm_precompile -- common/autobuild_common.sh@32 -- $ clang --version 00:01:26.002 15:00:51 autobuild_llvm_precompile -- common/autobuild_common.sh@32 -- $ [[ clang version 17.0.6 (Fedora 17.0.6-2.fc39) 00:01:26.002 Target: x86_64-redhat-linux-gnu 00:01:26.002 Thread model: posix 00:01:26.002 InstalledDir: /usr/bin =~ version (([0-9]+).([0-9]+).([0-9]+)) ]] 00:01:26.002 15:00:51 autobuild_llvm_precompile -- common/autobuild_common.sh@33 -- $ clang_num=17 00:01:26.002 15:00:51 autobuild_llvm_precompile -- common/autobuild_common.sh@35 -- $ export CC=clang-17 00:01:26.002 15:00:51 autobuild_llvm_precompile -- common/autobuild_common.sh@35 -- $ CC=clang-17 00:01:26.002 15:00:51 autobuild_llvm_precompile -- common/autobuild_common.sh@36 -- $ export CXX=clang++-17 00:01:26.002 15:00:51 autobuild_llvm_precompile -- common/autobuild_common.sh@36 -- $ CXX=clang++-17 00:01:26.002 15:00:51 autobuild_llvm_precompile -- common/autobuild_common.sh@38 -- $ fuzzer_libs=(/usr/lib*/clang/@("$clang_num"|"$clang_version")/lib/*linux*/libclang_rt.fuzzer_no_main?(-x86_64).a) 00:01:26.002 15:00:51 autobuild_llvm_precompile -- common/autobuild_common.sh@39 -- $ fuzzer_lib=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:01:26.002 15:00:51 autobuild_llvm_precompile -- common/autobuild_common.sh@40 -- $ [[ -e /usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a ]] 00:01:26.002 15:00:51 autobuild_llvm_precompile -- common/autobuild_common.sh@42 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a' 00:01:26.002 15:00:51 autobuild_llvm_precompile -- common/autobuild_common.sh@44 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user 
--with-fuzzer=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:01:26.261 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:01:26.261 Using default DPDK in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:01:26.829 Using 'verbs' RDMA provider 00:01:42.658 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:54.871 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:54.871 Creating mk/config.mk...done. 00:01:54.871 Creating mk/cc.flags.mk...done. 00:01:54.871 Type 'make' to build. 00:01:54.871 00:01:54.871 real 0m28.445s 00:01:54.871 user 0m12.608s 00:01:54.871 sys 0m14.849s 00:01:54.871 15:01:19 autobuild_llvm_precompile -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:54.871 15:01:19 autobuild_llvm_precompile -- common/autotest_common.sh@10 -- $ set +x 00:01:54.871 ************************************ 00:01:54.871 END TEST autobuild_llvm_precompile 00:01:54.871 ************************************ 00:01:54.871 15:01:19 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:54.871 15:01:19 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:54.871 15:01:19 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:54.871 15:01:19 -- spdk/autobuild.sh@62 -- $ [[ 1 -eq 1 ]] 00:01:54.871 15:01:19 -- spdk/autobuild.sh@64 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:01:54.871 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:01:54.871 Using default DPDK in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:01:55.130 Using 'verbs' RDMA provider 00:02:08.281 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal.log)...done. 00:02:20.497 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:20.497 Creating mk/config.mk...done. 00:02:20.497 Creating mk/cc.flags.mk...done. 00:02:20.497 Type 'make' to build. 00:02:20.497 15:01:44 -- spdk/autobuild.sh@70 -- $ run_test make make -j112 00:02:20.497 15:01:44 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:20.497 15:01:44 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:20.497 15:01:44 -- common/autotest_common.sh@10 -- $ set +x 00:02:20.497 ************************************ 00:02:20.497 START TEST make 00:02:20.497 ************************************ 00:02:20.497 15:01:44 make -- common/autotest_common.sh@1129 -- $ make -j112 00:02:20.497 make[1]: Nothing to be done for 'all'. 
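The autobuild_llvm_precompile stage above derives the clang major version from clang --version and resolves the matching no-main libFuzzer archive before running configure; a hedged sketch of that detection logic (regex and glob adapted from the trace, exact library paths vary by distribution):

    # Extract the clang major version, e.g. "17" for clang 17.0.6.
    clang_num=$(clang --version | sed -n 's/.*clang version \([0-9]*\)\..*/\1/p' | head -n1)
    export CC="clang-$clang_num" CXX="clang++-$clang_num"
    # Resolve the no-main libFuzzer runtime shipped with that clang.
    shopt -s nullglob extglob
    fuzzer_libs=(/usr/lib*/clang/"$clang_num"/lib/*linux*/libclang_rt.fuzzer_no_main?(-x86_64).a)
    fuzzer_lib=${fuzzer_libs[0]}
    # Feed it to SPDK's configure, as the fuzzer build above does.
    ./configure --enable-debug --enable-ubsan --with-fuzzer="$fuzzer_lib"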
00:02:20.497 help2man: can't get `--help' info from ./programs/igzip 00:02:20.497 Try `--no-discard-stderr' if option outputs to stderr 00:02:20.497 make[3]: [Makefile:4944: programs/igzip.1] Error 127 (ignored) 00:02:21.067 The Meson build system 00:02:21.067 Version: 1.5.0 00:02:21.067 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user 00:02:21.067 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:21.067 Build type: native build 00:02:21.067 Project name: libvfio-user 00:02:21.067 Project version: 0.0.1 00:02:21.067 C compiler for the host machine: clang-17 (clang 17.0.6 "clang version 17.0.6 (Fedora 17.0.6-2.fc39)") 00:02:21.067 C linker for the host machine: clang-17 ld.bfd 2.40-14 00:02:21.067 Host machine cpu family: x86_64 00:02:21.067 Host machine cpu: x86_64 00:02:21.067 Run-time dependency threads found: YES 00:02:21.067 Library dl found: YES 00:02:21.067 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:21.067 Run-time dependency json-c found: YES 0.17 00:02:21.067 Run-time dependency cmocka found: YES 1.1.7 00:02:21.067 Program pytest-3 found: NO 00:02:21.067 Program flake8 found: NO 00:02:21.067 Program misspell-fixer found: NO 00:02:21.067 Program restructuredtext-lint found: NO 00:02:21.067 Program valgrind found: YES (/usr/bin/valgrind) 00:02:21.067 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:21.067 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:21.067 Compiler for C supports arguments -Wwrite-strings: YES 00:02:21.067 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:21.067 Program test-lspci.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:02:21.067 Program test-linkage.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:02:21.068 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
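libvfio-user is configured above as an out-of-tree meson build; a hedged sketch of the equivalent manual sequence, with source and build directories taken from this log, options matching the summary shown below, and the staged install mirroring the DESTDIR step further down:

    SRC=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user
    BUILD=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug
    # Configure out of tree: debug build, static library, custom libdir.
    meson setup "$BUILD" "$SRC" --buildtype=debug --default-library=static \
        -Dlibdir=/usr/local/lib
    ninja -C "$BUILD"
    # Stage the install into SPDK's build prefix rather than the system root.
    DESTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user \
        meson install --quiet -C "$BUILD"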
00:02:21.068 Build targets in project: 8 00:02:21.068 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:21.068 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:21.068 00:02:21.068 libvfio-user 0.0.1 00:02:21.068 00:02:21.068 User defined options 00:02:21.068 buildtype : debug 00:02:21.068 default_library: static 00:02:21.068 libdir : /usr/local/lib 00:02:21.068 00:02:21.068 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:21.638 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:21.638 [1/36] Compiling C object lib/libvfio-user.a.p/tran.c.o 00:02:21.638 [2/36] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:21.638 [3/36] Compiling C object samples/lspci.p/lspci.c.o 00:02:21.638 [4/36] Compiling C object samples/null.p/null.c.o 00:02:21.638 [5/36] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:21.638 [6/36] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:21.638 [7/36] Compiling C object lib/libvfio-user.a.p/irq.c.o 00:02:21.638 [8/36] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:21.638 [9/36] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:21.638 [10/36] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:21.638 [11/36] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:21.638 [12/36] Compiling C object lib/libvfio-user.a.p/pci.c.o 00:02:21.638 [13/36] Compiling C object lib/libvfio-user.a.p/migration.c.o 00:02:21.638 [14/36] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:21.638 [15/36] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:21.638 [16/36] Compiling C object lib/libvfio-user.a.p/dma.c.o 00:02:21.638 [17/36] Compiling C object test/unit_tests.p/mocks.c.o 00:02:21.638 [18/36] Compiling C object lib/libvfio-user.a.p/tran_sock.c.o 00:02:21.638 [19/36] Compiling C object lib/libvfio-user.a.p/pci_caps.c.o 00:02:21.638 [20/36] Compiling C object samples/server.p/server.c.o 00:02:21.638 [21/36] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:21.638 [22/36] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:21.638 [23/36] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:21.638 [24/36] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:21.638 [25/36] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:21.638 [26/36] Compiling C object samples/client.p/client.c.o 00:02:21.638 [27/36] Compiling C object lib/libvfio-user.a.p/libvfio-user.c.o 00:02:21.638 [28/36] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:21.638 [29/36] Linking static target lib/libvfio-user.a 00:02:21.638 [30/36] Linking target samples/client 00:02:21.638 [31/36] Linking target test/unit_tests 00:02:21.638 [32/36] Linking target samples/null 00:02:21.638 [33/36] Linking target samples/server 00:02:21.638 [34/36] Linking target samples/lspci 00:02:21.638 [35/36] Linking target samples/shadow_ioeventfd_server 00:02:21.638 [36/36] Linking target samples/gpio-pci-idio-16 00:02:21.638 INFO: autodetecting backend as ninja 00:02:21.638 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:21.898 DESTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user meson install --quiet -C 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:22.157 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:22.157 ninja: no work to do. 00:02:27.430 The Meson build system 00:02:27.430 Version: 1.5.0 00:02:27.430 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk 00:02:27.430 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp 00:02:27.430 Build type: native build 00:02:27.430 Program cat found: YES (/usr/bin/cat) 00:02:27.430 Project name: DPDK 00:02:27.430 Project version: 24.03.0 00:02:27.430 C compiler for the host machine: clang-17 (clang 17.0.6 "clang version 17.0.6 (Fedora 17.0.6-2.fc39)") 00:02:27.430 C linker for the host machine: clang-17 ld.bfd 2.40-14 00:02:27.430 Host machine cpu family: x86_64 00:02:27.430 Host machine cpu: x86_64 00:02:27.430 Message: ## Building in Developer Mode ## 00:02:27.430 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:27.430 Program check-symbols.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:02:27.430 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:27.430 Program python3 found: YES (/usr/bin/python3) 00:02:27.430 Program cat found: YES (/usr/bin/cat) 00:02:27.430 Compiler for C supports arguments -march=native: YES 00:02:27.430 Checking for size of "void *" : 8 00:02:27.430 Checking for size of "void *" : 8 (cached) 00:02:27.430 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:27.430 Library m found: YES 00:02:27.430 Library numa found: YES 00:02:27.430 Has header "numaif.h" : YES 00:02:27.430 Library fdt found: NO 00:02:27.430 Library execinfo found: NO 00:02:27.430 Has header "execinfo.h" : YES 00:02:27.430 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:27.430 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:27.430 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:27.430 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:27.430 Run-time dependency openssl found: YES 3.1.1 00:02:27.430 Run-time dependency libpcap found: YES 1.10.4 00:02:27.430 Has header "pcap.h" with dependency libpcap: YES 00:02:27.430 Compiler for C supports arguments -Wcast-qual: YES 00:02:27.430 Compiler for C supports arguments -Wdeprecated: YES 00:02:27.430 Compiler for C supports arguments -Wformat: YES 00:02:27.430 Compiler for C supports arguments -Wformat-nonliteral: YES 00:02:27.430 Compiler for C supports arguments -Wformat-security: YES 00:02:27.430 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:27.430 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:27.430 Compiler for C supports arguments -Wnested-externs: YES 00:02:27.430 Compiler for C supports arguments -Wold-style-definition: YES 00:02:27.430 Compiler for C supports arguments -Wpointer-arith: YES 00:02:27.430 Compiler for C supports arguments -Wsign-compare: YES 00:02:27.431 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:27.431 Compiler for C supports arguments -Wundef: YES 00:02:27.431 Compiler for C supports arguments -Wwrite-strings: YES 00:02:27.431 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:27.431 Compiler for C supports arguments -Wno-packed-not-aligned: NO 00:02:27.431 Compiler for C supports arguments -Wno-missing-field-initializers: 
YES 00:02:27.431 Program objdump found: YES (/usr/bin/objdump) 00:02:27.431 Compiler for C supports arguments -mavx512f: YES 00:02:27.431 Checking if "AVX512 checking" compiles: YES 00:02:27.431 Fetching value of define "__SSE4_2__" : 1 00:02:27.431 Fetching value of define "__AES__" : 1 00:02:27.431 Fetching value of define "__AVX__" : 1 00:02:27.431 Fetching value of define "__AVX2__" : 1 00:02:27.431 Fetching value of define "__AVX512BW__" : 1 00:02:27.431 Fetching value of define "__AVX512CD__" : 1 00:02:27.431 Fetching value of define "__AVX512DQ__" : 1 00:02:27.431 Fetching value of define "__AVX512F__" : 1 00:02:27.431 Fetching value of define "__AVX512VL__" : 1 00:02:27.431 Fetching value of define "__PCLMUL__" : 1 00:02:27.431 Fetching value of define "__RDRND__" : 1 00:02:27.431 Fetching value of define "__RDSEED__" : 1 00:02:27.431 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:27.431 Fetching value of define "__znver1__" : (undefined) 00:02:27.431 Fetching value of define "__znver2__" : (undefined) 00:02:27.431 Fetching value of define "__znver3__" : (undefined) 00:02:27.431 Fetching value of define "__znver4__" : (undefined) 00:02:27.431 Compiler for C supports arguments -Wno-format-truncation: NO 00:02:27.431 Message: lib/log: Defining dependency "log" 00:02:27.431 Message: lib/kvargs: Defining dependency "kvargs" 00:02:27.431 Message: lib/telemetry: Defining dependency "telemetry" 00:02:27.431 Checking for function "getentropy" : NO 00:02:27.431 Message: lib/eal: Defining dependency "eal" 00:02:27.431 Message: lib/ring: Defining dependency "ring" 00:02:27.431 Message: lib/rcu: Defining dependency "rcu" 00:02:27.431 Message: lib/mempool: Defining dependency "mempool" 00:02:27.431 Message: lib/mbuf: Defining dependency "mbuf" 00:02:27.431 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:27.431 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:27.431 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:27.431 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:27.431 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:27.431 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:27.431 Compiler for C supports arguments -mpclmul: YES 00:02:27.431 Compiler for C supports arguments -maes: YES 00:02:27.431 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:27.431 Compiler for C supports arguments -mavx512bw: YES 00:02:27.431 Compiler for C supports arguments -mavx512dq: YES 00:02:27.431 Compiler for C supports arguments -mavx512vl: YES 00:02:27.431 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:27.431 Compiler for C supports arguments -mavx2: YES 00:02:27.431 Compiler for C supports arguments -mavx: YES 00:02:27.431 Message: lib/net: Defining dependency "net" 00:02:27.431 Message: lib/meter: Defining dependency "meter" 00:02:27.431 Message: lib/ethdev: Defining dependency "ethdev" 00:02:27.431 Message: lib/pci: Defining dependency "pci" 00:02:27.431 Message: lib/cmdline: Defining dependency "cmdline" 00:02:27.431 Message: lib/hash: Defining dependency "hash" 00:02:27.431 Message: lib/timer: Defining dependency "timer" 00:02:27.431 Message: lib/compressdev: Defining dependency "compressdev" 00:02:27.431 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:27.431 Message: lib/dmadev: Defining dependency "dmadev" 00:02:27.431 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:27.431 Message: lib/power: Defining dependency "power" 00:02:27.431 Message: lib/reorder: Defining 
dependency "reorder" 00:02:27.431 Message: lib/security: Defining dependency "security" 00:02:27.431 Has header "linux/userfaultfd.h" : YES 00:02:27.431 Has header "linux/vduse.h" : YES 00:02:27.431 Message: lib/vhost: Defining dependency "vhost" 00:02:27.431 Compiler for C supports arguments -Wno-format-truncation: NO (cached) 00:02:27.431 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:27.431 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:27.431 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:27.431 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:27.431 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:27.431 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:27.431 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:27.431 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:27.431 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:27.431 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:27.431 Configuring doxy-api-html.conf using configuration 00:02:27.431 Configuring doxy-api-man.conf using configuration 00:02:27.431 Program mandb found: YES (/usr/bin/mandb) 00:02:27.431 Program sphinx-build found: NO 00:02:27.431 Configuring rte_build_config.h using configuration 00:02:27.431 Message: 00:02:27.431 ================= 00:02:27.431 Applications Enabled 00:02:27.431 ================= 00:02:27.431 00:02:27.431 apps: 00:02:27.431 00:02:27.431 00:02:27.431 Message: 00:02:27.431 ================= 00:02:27.431 Libraries Enabled 00:02:27.431 ================= 00:02:27.431 00:02:27.431 libs: 00:02:27.431 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:27.431 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:27.431 cryptodev, dmadev, power, reorder, security, vhost, 00:02:27.431 00:02:27.431 Message: 00:02:27.431 =============== 00:02:27.431 Drivers Enabled 00:02:27.431 =============== 00:02:27.431 00:02:27.431 common: 00:02:27.431 00:02:27.431 bus: 00:02:27.431 pci, vdev, 00:02:27.431 mempool: 00:02:27.431 ring, 00:02:27.431 dma: 00:02:27.431 00:02:27.431 net: 00:02:27.431 00:02:27.431 crypto: 00:02:27.431 00:02:27.431 compress: 00:02:27.431 00:02:27.431 vdpa: 00:02:27.431 00:02:27.431 00:02:27.431 Message: 00:02:27.431 ================= 00:02:27.431 Content Skipped 00:02:27.431 ================= 00:02:27.431 00:02:27.431 apps: 00:02:27.431 dumpcap: explicitly disabled via build config 00:02:27.431 graph: explicitly disabled via build config 00:02:27.431 pdump: explicitly disabled via build config 00:02:27.431 proc-info: explicitly disabled via build config 00:02:27.431 test-acl: explicitly disabled via build config 00:02:27.431 test-bbdev: explicitly disabled via build config 00:02:27.431 test-cmdline: explicitly disabled via build config 00:02:27.431 test-compress-perf: explicitly disabled via build config 00:02:27.431 test-crypto-perf: explicitly disabled via build config 00:02:27.431 test-dma-perf: explicitly disabled via build config 00:02:27.431 test-eventdev: explicitly disabled via build config 00:02:27.431 test-fib: explicitly disabled via build config 00:02:27.431 test-flow-perf: explicitly disabled via build config 00:02:27.431 test-gpudev: explicitly disabled via build config 00:02:27.431 test-mldev: explicitly disabled via build config 00:02:27.431 test-pipeline: explicitly disabled via build config 00:02:27.431 test-pmd: 
explicitly disabled via build config 00:02:27.431 test-regex: explicitly disabled via build config 00:02:27.431 test-sad: explicitly disabled via build config 00:02:27.431 test-security-perf: explicitly disabled via build config 00:02:27.431 00:02:27.431 libs: 00:02:27.431 argparse: explicitly disabled via build config 00:02:27.431 metrics: explicitly disabled via build config 00:02:27.431 acl: explicitly disabled via build config 00:02:27.431 bbdev: explicitly disabled via build config 00:02:27.431 bitratestats: explicitly disabled via build config 00:02:27.431 bpf: explicitly disabled via build config 00:02:27.431 cfgfile: explicitly disabled via build config 00:02:27.431 distributor: explicitly disabled via build config 00:02:27.431 efd: explicitly disabled via build config 00:02:27.431 eventdev: explicitly disabled via build config 00:02:27.431 dispatcher: explicitly disabled via build config 00:02:27.431 gpudev: explicitly disabled via build config 00:02:27.431 gro: explicitly disabled via build config 00:02:27.431 gso: explicitly disabled via build config 00:02:27.431 ip_frag: explicitly disabled via build config 00:02:27.431 jobstats: explicitly disabled via build config 00:02:27.431 latencystats: explicitly disabled via build config 00:02:27.431 lpm: explicitly disabled via build config 00:02:27.431 member: explicitly disabled via build config 00:02:27.431 pcapng: explicitly disabled via build config 00:02:27.431 rawdev: explicitly disabled via build config 00:02:27.431 regexdev: explicitly disabled via build config 00:02:27.431 mldev: explicitly disabled via build config 00:02:27.431 rib: explicitly disabled via build config 00:02:27.431 sched: explicitly disabled via build config 00:02:27.431 stack: explicitly disabled via build config 00:02:27.431 ipsec: explicitly disabled via build config 00:02:27.431 pdcp: explicitly disabled via build config 00:02:27.431 fib: explicitly disabled via build config 00:02:27.431 port: explicitly disabled via build config 00:02:27.431 pdump: explicitly disabled via build config 00:02:27.431 table: explicitly disabled via build config 00:02:27.431 pipeline: explicitly disabled via build config 00:02:27.431 graph: explicitly disabled via build config 00:02:27.431 node: explicitly disabled via build config 00:02:27.432 00:02:27.432 drivers: 00:02:27.432 common/cpt: not in enabled drivers build config 00:02:27.432 common/dpaax: not in enabled drivers build config 00:02:27.432 common/iavf: not in enabled drivers build config 00:02:27.432 common/idpf: not in enabled drivers build config 00:02:27.432 common/ionic: not in enabled drivers build config 00:02:27.432 common/mvep: not in enabled drivers build config 00:02:27.432 common/octeontx: not in enabled drivers build config 00:02:27.432 bus/auxiliary: not in enabled drivers build config 00:02:27.432 bus/cdx: not in enabled drivers build config 00:02:27.432 bus/dpaa: not in enabled drivers build config 00:02:27.432 bus/fslmc: not in enabled drivers build config 00:02:27.432 bus/ifpga: not in enabled drivers build config 00:02:27.432 bus/platform: not in enabled drivers build config 00:02:27.432 bus/uacce: not in enabled drivers build config 00:02:27.432 bus/vmbus: not in enabled drivers build config 00:02:27.432 common/cnxk: not in enabled drivers build config 00:02:27.432 common/mlx5: not in enabled drivers build config 00:02:27.432 common/nfp: not in enabled drivers build config 00:02:27.432 common/nitrox: not in enabled drivers build config 00:02:27.432 common/qat: not in enabled drivers build config 
00:02:27.432 common/sfc_efx: not in enabled drivers build config 00:02:27.432 mempool/bucket: not in enabled drivers build config 00:02:27.432 mempool/cnxk: not in enabled drivers build config 00:02:27.432 mempool/dpaa: not in enabled drivers build config 00:02:27.432 mempool/dpaa2: not in enabled drivers build config 00:02:27.432 mempool/octeontx: not in enabled drivers build config 00:02:27.432 mempool/stack: not in enabled drivers build config 00:02:27.432 dma/cnxk: not in enabled drivers build config 00:02:27.432 dma/dpaa: not in enabled drivers build config 00:02:27.432 dma/dpaa2: not in enabled drivers build config 00:02:27.432 dma/hisilicon: not in enabled drivers build config 00:02:27.432 dma/idxd: not in enabled drivers build config 00:02:27.432 dma/ioat: not in enabled drivers build config 00:02:27.432 dma/skeleton: not in enabled drivers build config 00:02:27.432 net/af_packet: not in enabled drivers build config 00:02:27.432 net/af_xdp: not in enabled drivers build config 00:02:27.432 net/ark: not in enabled drivers build config 00:02:27.432 net/atlantic: not in enabled drivers build config 00:02:27.432 net/avp: not in enabled drivers build config 00:02:27.432 net/axgbe: not in enabled drivers build config 00:02:27.432 net/bnx2x: not in enabled drivers build config 00:02:27.432 net/bnxt: not in enabled drivers build config 00:02:27.432 net/bonding: not in enabled drivers build config 00:02:27.432 net/cnxk: not in enabled drivers build config 00:02:27.432 net/cpfl: not in enabled drivers build config 00:02:27.432 net/cxgbe: not in enabled drivers build config 00:02:27.432 net/dpaa: not in enabled drivers build config 00:02:27.432 net/dpaa2: not in enabled drivers build config 00:02:27.432 net/e1000: not in enabled drivers build config 00:02:27.432 net/ena: not in enabled drivers build config 00:02:27.432 net/enetc: not in enabled drivers build config 00:02:27.432 net/enetfec: not in enabled drivers build config 00:02:27.432 net/enic: not in enabled drivers build config 00:02:27.432 net/failsafe: not in enabled drivers build config 00:02:27.432 net/fm10k: not in enabled drivers build config 00:02:27.432 net/gve: not in enabled drivers build config 00:02:27.432 net/hinic: not in enabled drivers build config 00:02:27.432 net/hns3: not in enabled drivers build config 00:02:27.432 net/i40e: not in enabled drivers build config 00:02:27.432 net/iavf: not in enabled drivers build config 00:02:27.432 net/ice: not in enabled drivers build config 00:02:27.432 net/idpf: not in enabled drivers build config 00:02:27.432 net/igc: not in enabled drivers build config 00:02:27.432 net/ionic: not in enabled drivers build config 00:02:27.432 net/ipn3ke: not in enabled drivers build config 00:02:27.432 net/ixgbe: not in enabled drivers build config 00:02:27.432 net/mana: not in enabled drivers build config 00:02:27.432 net/memif: not in enabled drivers build config 00:02:27.432 net/mlx4: not in enabled drivers build config 00:02:27.432 net/mlx5: not in enabled drivers build config 00:02:27.432 net/mvneta: not in enabled drivers build config 00:02:27.432 net/mvpp2: not in enabled drivers build config 00:02:27.432 net/netvsc: not in enabled drivers build config 00:02:27.432 net/nfb: not in enabled drivers build config 00:02:27.432 net/nfp: not in enabled drivers build config 00:02:27.432 net/ngbe: not in enabled drivers build config 00:02:27.432 net/null: not in enabled drivers build config 00:02:27.432 net/octeontx: not in enabled drivers build config 00:02:27.432 net/octeon_ep: not in enabled 
drivers build config 00:02:27.432 net/pcap: not in enabled drivers build config 00:02:27.432 net/pfe: not in enabled drivers build config 00:02:27.432 net/qede: not in enabled drivers build config 00:02:27.432 net/ring: not in enabled drivers build config 00:02:27.432 net/sfc: not in enabled drivers build config 00:02:27.432 net/softnic: not in enabled drivers build config 00:02:27.432 net/tap: not in enabled drivers build config 00:02:27.432 net/thunderx: not in enabled drivers build config 00:02:27.432 net/txgbe: not in enabled drivers build config 00:02:27.432 net/vdev_netvsc: not in enabled drivers build config 00:02:27.432 net/vhost: not in enabled drivers build config 00:02:27.432 net/virtio: not in enabled drivers build config 00:02:27.432 net/vmxnet3: not in enabled drivers build config 00:02:27.432 raw/*: missing internal dependency, "rawdev" 00:02:27.432 crypto/armv8: not in enabled drivers build config 00:02:27.432 crypto/bcmfs: not in enabled drivers build config 00:02:27.432 crypto/caam_jr: not in enabled drivers build config 00:02:27.432 crypto/ccp: not in enabled drivers build config 00:02:27.432 crypto/cnxk: not in enabled drivers build config 00:02:27.432 crypto/dpaa_sec: not in enabled drivers build config 00:02:27.432 crypto/dpaa2_sec: not in enabled drivers build config 00:02:27.432 crypto/ipsec_mb: not in enabled drivers build config 00:02:27.432 crypto/mlx5: not in enabled drivers build config 00:02:27.432 crypto/mvsam: not in enabled drivers build config 00:02:27.432 crypto/nitrox: not in enabled drivers build config 00:02:27.432 crypto/null: not in enabled drivers build config 00:02:27.432 crypto/octeontx: not in enabled drivers build config 00:02:27.432 crypto/openssl: not in enabled drivers build config 00:02:27.432 crypto/scheduler: not in enabled drivers build config 00:02:27.432 crypto/uadk: not in enabled drivers build config 00:02:27.432 crypto/virtio: not in enabled drivers build config 00:02:27.432 compress/isal: not in enabled drivers build config 00:02:27.432 compress/mlx5: not in enabled drivers build config 00:02:27.432 compress/nitrox: not in enabled drivers build config 00:02:27.432 compress/octeontx: not in enabled drivers build config 00:02:27.432 compress/zlib: not in enabled drivers build config 00:02:27.432 regex/*: missing internal dependency, "regexdev" 00:02:27.432 ml/*: missing internal dependency, "mldev" 00:02:27.432 vdpa/ifc: not in enabled drivers build config 00:02:27.432 vdpa/mlx5: not in enabled drivers build config 00:02:27.432 vdpa/nfp: not in enabled drivers build config 00:02:27.432 vdpa/sfc: not in enabled drivers build config 00:02:27.432 event/*: missing internal dependency, "eventdev" 00:02:27.432 baseband/*: missing internal dependency, "bbdev" 00:02:27.432 gpu/*: missing internal dependency, "gpudev" 00:02:27.432 00:02:27.432 00:02:27.692 Build targets in project: 85 00:02:27.692 00:02:27.692 DPDK 24.03.0 00:02:27.692 00:02:27.692 User defined options 00:02:27.692 buildtype : debug 00:02:27.692 default_library : static 00:02:27.692 libdir : lib 00:02:27.692 prefix : /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:02:27.692 c_args : -fPIC -Werror 00:02:27.692 c_link_args : 00:02:27.692 cpu_instruction_set: native 00:02:27.692 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:27.692 disable_libs : 
acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:27.692 enable_docs : false 00:02:27.692 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:02:27.692 enable_kmods : false 00:02:27.692 max_lcores : 128 00:02:27.692 tests : false 00:02:27.692 00:02:27.692 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:28.274 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp' 00:02:28.274 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:28.274 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:28.274 [3/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:28.274 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:28.275 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:28.275 [6/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:28.275 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:28.275 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:28.275 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:28.275 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:28.275 [11/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:28.275 [12/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:28.275 [13/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:28.275 [14/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:28.275 [15/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:28.275 [16/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:28.275 [17/268] Linking static target lib/librte_kvargs.a 00:02:28.275 [18/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:28.275 [19/268] Linking static target lib/librte_log.a 00:02:28.275 [20/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:28.275 [21/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:28.275 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:28.275 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:28.275 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:28.275 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:28.275 [26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:28.275 [27/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:28.275 [28/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:28.275 [29/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:28.275 [30/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:28.275 [31/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:28.275 [32/268] Linking static target lib/librte_pci.a 00:02:28.275 [33/268] Compiling C object 
lib/librte_power.a.p/power_guest_channel.c.o 00:02:28.534 [34/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:28.534 [35/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:28.534 [36/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:28.793 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:28.793 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:28.793 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:28.793 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:28.793 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:28.793 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:28.793 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:28.793 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:28.793 [45/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:28.793 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:28.793 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:28.793 [48/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:28.793 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:28.793 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:28.793 [51/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:28.793 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:28.793 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:28.793 [54/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:28.793 [55/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:28.793 [56/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:28.793 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:28.793 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:28.793 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:28.793 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:28.793 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:28.793 [62/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:28.793 [63/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:28.793 [64/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:28.793 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:28.793 [66/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:28.793 [67/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:28.793 [68/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:28.793 [69/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:28.793 [70/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:28.793 [71/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:28.793 [72/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:28.793 [73/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:28.793 [74/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:28.793 [75/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:28.793 [76/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:28.793 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:28.793 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:28.793 [79/268] Linking static target lib/librte_meter.a 00:02:28.793 [80/268] Linking static target lib/librte_telemetry.a 00:02:28.793 [81/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:28.793 [82/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:28.793 [83/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:28.793 [84/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:28.793 [85/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:28.793 [86/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:28.793 [87/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:28.793 [88/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:28.793 [89/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:28.793 [90/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:28.793 [91/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:28.793 [92/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:28.793 [93/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:28.793 [94/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:28.793 [95/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:28.793 [96/268] Linking static target lib/librte_ring.a 00:02:28.793 [97/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:28.793 [98/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:28.793 [99/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.793 [100/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:28.793 [101/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:28.793 [102/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:28.793 [103/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:28.793 [104/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:28.793 [105/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.793 [106/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:28.793 [107/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:28.793 [108/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:28.793 [109/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:28.793 [110/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:28.793 [111/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:28.793 [112/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:28.793 [113/268] 
Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:28.793 [114/268] Linking static target lib/librte_cmdline.a 00:02:28.793 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:28.793 [116/268] Linking static target lib/librte_mempool.a 00:02:28.793 [117/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:28.793 [118/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:28.793 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:28.793 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:28.793 [121/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:28.793 [122/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:28.793 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:28.793 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:28.793 [125/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:28.793 [126/268] Linking static target lib/librte_timer.a 00:02:28.793 [127/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:28.793 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:28.793 [129/268] Linking static target lib/librte_rcu.a 00:02:28.794 [130/268] Linking static target lib/librte_eal.a 00:02:28.794 [131/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:28.794 [132/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:28.794 [133/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:28.794 [134/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:28.794 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:28.794 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:28.794 [137/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:28.794 [138/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:28.794 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:28.794 [140/268] Linking static target lib/librte_net.a 00:02:29.053 [141/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:29.053 [142/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:29.053 [143/268] Linking static target lib/librte_dmadev.a 00:02:29.053 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:29.053 [145/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:29.053 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:29.053 [147/268] Linking static target lib/librte_compressdev.a 00:02:29.053 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:29.053 [149/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:29.053 [150/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.053 [151/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:29.053 [152/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.053 [153/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:29.053 [154/268] Compiling C 
object lib/librte_power.a.p/power_rte_power.c.o 00:02:29.053 [155/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:29.053 [156/268] Linking static target lib/librte_mbuf.a 00:02:29.053 [157/268] Linking target lib/librte_log.so.24.1 00:02:29.053 [158/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:29.053 [159/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.053 [160/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:29.053 [161/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:29.053 [162/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:29.053 [163/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:29.053 [164/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:29.053 [165/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:29.053 [166/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:29.053 [167/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:29.053 [168/268] Linking static target lib/librte_hash.a 00:02:29.053 [169/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:29.053 [170/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:29.053 [171/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:29.053 [172/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:29.053 [173/268] Linking static target lib/librte_cryptodev.a 00:02:29.312 [174/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:29.312 [175/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:29.312 [176/268] Linking static target lib/librte_power.a 00:02:29.312 [177/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:29.312 [178/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:29.312 [179/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:29.312 [180/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.312 [181/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:29.312 [182/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:29.312 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:29.312 [184/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.312 [185/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.313 [186/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:29.313 [187/268] Linking target lib/librte_kvargs.so.24.1 00:02:29.313 [188/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:29.313 [189/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.313 [190/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:29.313 [191/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:29.313 [192/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:29.313 [193/268] Linking target lib/librte_telemetry.so.24.1 00:02:29.313 [194/268] Compiling C object 
lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:29.313 [195/268] Linking static target lib/librte_security.a 00:02:29.313 [196/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:29.313 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:29.313 [198/268] Linking static target lib/librte_reorder.a 00:02:29.313 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:29.313 [200/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:29.313 [201/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:29.313 [202/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:29.313 [203/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:29.313 [204/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:29.313 [205/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:29.573 [206/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:29.573 [207/268] Linking static target drivers/librte_mempool_ring.a 00:02:29.573 [208/268] Linking static target drivers/librte_bus_vdev.a 00:02:29.573 [209/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:29.573 [210/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:29.573 [211/268] Linking static target lib/librte_ethdev.a 00:02:29.573 [212/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:29.573 [213/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:29.573 [214/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:29.573 [215/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:29.573 [216/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.573 [217/268] Linking static target drivers/librte_bus_pci.a 00:02:29.573 [218/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.833 [219/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.833 [220/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.833 [221/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.833 [222/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.092 [223/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.092 [224/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.092 [225/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.092 [226/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.351 [227/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:30.351 [228/268] Linking static target lib/librte_vhost.a 00:02:30.351 [229/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.290 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 
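Note on the DPDK submodule build above: the "User defined options" summary printed by meson (buildtype : debug, default_library : static, c_args : -fPIC -Werror, the long disable_apps/disable_libs lists and the small enable_drivers set) fully determines what ninja compiles in build-tmp. The exact invocation is wrapped by SPDK's dpdkbuild makefiles and is not shown in this log, but as a rough, hand-written sketch an equivalent standalone configuration using stock DPDK meson options would look like:

    # Hedged sketch only: option values are copied from the summary above,
    # the command form is not the one SPDK's wrapper actually ran.
    meson setup build-tmp \
        -Dbuildtype=debug -Ddefault_library=static \
        -Dc_args='-fPIC -Werror' \
        -Denable_docs=false -Dtests=false -Dmax_lcores=128 \
        -Ddisable_apps=dumpcap,graph,pdump,proc-info,... \
        -Ddisable_libs=acl,argparse,bbdev,bitratestats,... \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,power/acpi,...
    ninja -C build-tmp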
00:02:32.671 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.262 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.806 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.806 [234/268] Linking target lib/librte_eal.so.24.1 00:02:41.806 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:41.806 [236/268] Linking target lib/librte_ring.so.24.1 00:02:41.806 [237/268] Linking target lib/librte_pci.so.24.1 00:02:41.806 [238/268] Linking target lib/librte_meter.so.24.1 00:02:41.806 [239/268] Linking target lib/librte_timer.so.24.1 00:02:41.806 [240/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:41.806 [241/268] Linking target lib/librte_dmadev.so.24.1 00:02:42.066 [242/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:42.066 [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:42.066 [244/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:42.066 [245/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:42.066 [246/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:42.066 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:42.066 [248/268] Linking target lib/librte_mempool.so.24.1 00:02:42.066 [249/268] Linking target lib/librte_rcu.so.24.1 00:02:42.066 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:42.066 [251/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:42.325 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:42.325 [253/268] Linking target lib/librte_mbuf.so.24.1 00:02:42.325 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:42.586 [255/268] Linking target lib/librte_compressdev.so.24.1 00:02:42.586 [256/268] Linking target lib/librte_reorder.so.24.1 00:02:42.586 [257/268] Linking target lib/librte_net.so.24.1 00:02:42.586 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:02:42.586 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:42.586 [260/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:42.586 [261/268] Linking target lib/librte_security.so.24.1 00:02:42.586 [262/268] Linking target lib/librte_hash.so.24.1 00:02:42.586 [263/268] Linking target lib/librte_cmdline.so.24.1 00:02:42.586 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:42.846 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:42.846 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:42.846 [267/268] Linking target lib/librte_power.so.24.1 00:02:42.846 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:42.846 INFO: autodetecting backend as ninja 00:02:42.846 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp -j 112 00:02:43.789 CC lib/ut_mock/mock.o 00:02:43.789 CC lib/ut/ut.o 00:02:43.789 CC lib/log/log.o 00:02:43.789 CC lib/log/log_flags.o 00:02:43.789 CC lib/log/log_deprecated.o 00:02:44.051 LIB libspdk_ut_mock.a 00:02:44.051 LIB libspdk_ut.a 00:02:44.051 LIB libspdk_log.a 00:02:44.312 
CC lib/ioat/ioat.o 00:02:44.312 CC lib/dma/dma.o 00:02:44.312 CC lib/util/cpuset.o 00:02:44.312 CC lib/util/base64.o 00:02:44.312 CC lib/util/bit_array.o 00:02:44.312 CC lib/util/crc32.o 00:02:44.312 CC lib/util/crc16.o 00:02:44.312 CC lib/util/crc64.o 00:02:44.312 CC lib/util/crc32c.o 00:02:44.312 CC lib/util/crc32_ieee.o 00:02:44.312 CC lib/util/fd_group.o 00:02:44.312 CC lib/util/dif.o 00:02:44.312 CC lib/util/fd.o 00:02:44.312 CXX lib/trace_parser/trace.o 00:02:44.312 CC lib/util/file.o 00:02:44.312 CC lib/util/hexlify.o 00:02:44.312 CC lib/util/iov.o 00:02:44.312 CC lib/util/math.o 00:02:44.312 CC lib/util/net.o 00:02:44.312 CC lib/util/pipe.o 00:02:44.312 CC lib/util/strerror_tls.o 00:02:44.312 CC lib/util/string.o 00:02:44.312 CC lib/util/uuid.o 00:02:44.312 CC lib/util/xor.o 00:02:44.312 CC lib/util/zipf.o 00:02:44.312 CC lib/util/md5.o 00:02:44.572 CC lib/vfio_user/host/vfio_user_pci.o 00:02:44.572 CC lib/vfio_user/host/vfio_user.o 00:02:44.572 LIB libspdk_dma.a 00:02:44.572 LIB libspdk_ioat.a 00:02:44.572 LIB libspdk_vfio_user.a 00:02:44.831 LIB libspdk_util.a 00:02:44.831 LIB libspdk_trace_parser.a 00:02:45.089 CC lib/conf/conf.o 00:02:45.089 CC lib/rdma_utils/rdma_utils.o 00:02:45.089 CC lib/vmd/vmd.o 00:02:45.089 CC lib/idxd/idxd.o 00:02:45.089 CC lib/vmd/led.o 00:02:45.089 CC lib/json/json_parse.o 00:02:45.089 CC lib/json/json_util.o 00:02:45.089 CC lib/idxd/idxd_user.o 00:02:45.089 CC lib/idxd/idxd_kernel.o 00:02:45.090 CC lib/json/json_write.o 00:02:45.090 CC lib/env_dpdk/env.o 00:02:45.090 CC lib/env_dpdk/memory.o 00:02:45.090 CC lib/env_dpdk/pci.o 00:02:45.090 CC lib/env_dpdk/pci_ioat.o 00:02:45.090 CC lib/env_dpdk/init.o 00:02:45.090 CC lib/env_dpdk/threads.o 00:02:45.090 CC lib/env_dpdk/pci_virtio.o 00:02:45.090 CC lib/env_dpdk/pci_event.o 00:02:45.090 CC lib/env_dpdk/pci_vmd.o 00:02:45.090 CC lib/env_dpdk/sigbus_handler.o 00:02:45.090 CC lib/env_dpdk/pci_idxd.o 00:02:45.090 CC lib/env_dpdk/pci_dpdk.o 00:02:45.090 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:45.090 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:45.090 LIB libspdk_conf.a 00:02:45.347 LIB libspdk_rdma_utils.a 00:02:45.347 LIB libspdk_json.a 00:02:45.347 LIB libspdk_idxd.a 00:02:45.347 LIB libspdk_vmd.a 00:02:45.605 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:45.605 CC lib/rdma_provider/common.o 00:02:45.605 CC lib/jsonrpc/jsonrpc_server.o 00:02:45.605 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:45.605 CC lib/jsonrpc/jsonrpc_client.o 00:02:45.605 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:45.605 LIB libspdk_rdma_provider.a 00:02:45.605 LIB libspdk_jsonrpc.a 00:02:45.863 LIB libspdk_env_dpdk.a 00:02:46.122 CC lib/rpc/rpc.o 00:02:46.122 LIB libspdk_rpc.a 00:02:46.380 CC lib/notify/notify.o 00:02:46.380 CC lib/notify/notify_rpc.o 00:02:46.639 CC lib/trace/trace.o 00:02:46.639 CC lib/keyring/keyring.o 00:02:46.639 CC lib/trace/trace_flags.o 00:02:46.639 CC lib/keyring/keyring_rpc.o 00:02:46.639 CC lib/trace/trace_rpc.o 00:02:46.639 LIB libspdk_notify.a 00:02:46.639 LIB libspdk_keyring.a 00:02:46.639 LIB libspdk_trace.a 00:02:46.897 CC lib/thread/iobuf.o 00:02:46.897 CC lib/thread/thread.o 00:02:46.897 CC lib/sock/sock.o 00:02:46.897 CC lib/sock/sock_rpc.o 00:02:47.156 LIB libspdk_sock.a 00:02:47.723 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:47.723 CC lib/nvme/nvme_ns_cmd.o 00:02:47.724 CC lib/nvme/nvme_ctrlr.o 00:02:47.724 CC lib/nvme/nvme_fabric.o 00:02:47.724 CC lib/nvme/nvme_pcie_common.o 00:02:47.724 CC lib/nvme/nvme_ns.o 00:02:47.724 CC lib/nvme/nvme_qpair.o 00:02:47.724 CC lib/nvme/nvme_pcie.o 00:02:47.724 CC lib/nvme/nvme.o 
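From "INFO: autodetecting backend as ninja" onward the log has switched from the DPDK submodule to SPDK's own tree: each "CC lib/<component>/<file>.o" line is an object compile and each "LIB libspdk_<component>.a" line is a static archive being produced. A minimal sketch of the sequence a developer would run locally to get the same kind of output (plain checkout assumed; the CI job's fuzzing- and coverage-specific configure flags come from the autorun config and are not visible here):

    # Hedged sketch of a default SPDK build, not the CI job's exact commands.
    git submodule update --init
    ./configure
    make -j"$(nproc)"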
00:02:47.724 CC lib/nvme/nvme_discovery.o 00:02:47.724 CC lib/nvme/nvme_quirks.o 00:02:47.724 CC lib/nvme/nvme_transport.o 00:02:47.724 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:47.724 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:47.724 CC lib/nvme/nvme_tcp.o 00:02:47.724 CC lib/nvme/nvme_opal.o 00:02:47.724 CC lib/nvme/nvme_io_msg.o 00:02:47.724 CC lib/nvme/nvme_poll_group.o 00:02:47.724 CC lib/nvme/nvme_zns.o 00:02:47.724 CC lib/nvme/nvme_stubs.o 00:02:47.724 CC lib/nvme/nvme_auth.o 00:02:47.724 CC lib/nvme/nvme_cuse.o 00:02:47.724 CC lib/nvme/nvme_vfio_user.o 00:02:47.724 CC lib/nvme/nvme_rdma.o 00:02:47.724 LIB libspdk_thread.a 00:02:47.982 CC lib/virtio/virtio.o 00:02:47.982 CC lib/virtio/virtio_vhost_user.o 00:02:47.982 CC lib/virtio/virtio_vfio_user.o 00:02:47.982 CC lib/virtio/virtio_pci.o 00:02:47.982 CC lib/accel/accel.o 00:02:47.982 CC lib/blob/blobstore.o 00:02:47.982 CC lib/init/json_config.o 00:02:47.982 CC lib/init/subsystem.o 00:02:47.982 CC lib/blob/request.o 00:02:47.982 CC lib/init/subsystem_rpc.o 00:02:47.982 CC lib/accel/accel_rpc.o 00:02:47.982 CC lib/init/rpc.o 00:02:47.982 CC lib/accel/accel_sw.o 00:02:47.982 CC lib/blob/zeroes.o 00:02:47.982 CC lib/blob/blob_bs_dev.o 00:02:47.982 CC lib/vfu_tgt/tgt_rpc.o 00:02:47.982 CC lib/vfu_tgt/tgt_endpoint.o 00:02:47.982 CC lib/fsdev/fsdev.o 00:02:47.982 CC lib/fsdev/fsdev_io.o 00:02:47.982 CC lib/fsdev/fsdev_rpc.o 00:02:48.241 LIB libspdk_init.a 00:02:48.241 LIB libspdk_virtio.a 00:02:48.241 LIB libspdk_vfu_tgt.a 00:02:48.500 LIB libspdk_fsdev.a 00:02:48.500 CC lib/event/app.o 00:02:48.500 CC lib/event/reactor.o 00:02:48.500 CC lib/event/log_rpc.o 00:02:48.500 CC lib/event/app_rpc.o 00:02:48.500 CC lib/event/scheduler_static.o 00:02:48.759 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:48.759 LIB libspdk_event.a 00:02:48.759 LIB libspdk_accel.a 00:02:48.759 LIB libspdk_nvme.a 00:02:49.101 CC lib/bdev/bdev.o 00:02:49.101 CC lib/bdev/bdev_rpc.o 00:02:49.101 CC lib/bdev/bdev_zone.o 00:02:49.101 CC lib/bdev/part.o 00:02:49.101 CC lib/bdev/scsi_nvme.o 00:02:49.101 LIB libspdk_fuse_dispatcher.a 00:02:49.698 LIB libspdk_blob.a 00:02:49.958 CC lib/blobfs/blobfs.o 00:02:49.958 CC lib/blobfs/tree.o 00:02:49.958 CC lib/lvol/lvol.o 00:02:50.527 LIB libspdk_lvol.a 00:02:50.527 LIB libspdk_blobfs.a 00:02:50.787 LIB libspdk_bdev.a 00:02:51.047 CC lib/nbd/nbd.o 00:02:51.047 CC lib/scsi/lun.o 00:02:51.047 CC lib/scsi/dev.o 00:02:51.047 CC lib/scsi/port.o 00:02:51.047 CC lib/nbd/nbd_rpc.o 00:02:51.047 CC lib/scsi/scsi.o 00:02:51.047 CC lib/scsi/scsi_bdev.o 00:02:51.047 CC lib/scsi/scsi_pr.o 00:02:51.047 CC lib/ublk/ublk.o 00:02:51.047 CC lib/scsi/scsi_rpc.o 00:02:51.047 CC lib/scsi/task.o 00:02:51.047 CC lib/ublk/ublk_rpc.o 00:02:51.047 CC lib/ftl/ftl_init.o 00:02:51.047 CC lib/ftl/ftl_core.o 00:02:51.047 CC lib/ftl/ftl_debug.o 00:02:51.047 CC lib/ftl/ftl_layout.o 00:02:51.047 CC lib/ftl/ftl_sb.o 00:02:51.047 CC lib/ftl/ftl_io.o 00:02:51.047 CC lib/ftl/ftl_l2p_flat.o 00:02:51.047 CC lib/ftl/ftl_l2p.o 00:02:51.047 CC lib/nvmf/ctrlr_bdev.o 00:02:51.047 CC lib/nvmf/ctrlr.o 00:02:51.047 CC lib/ftl/ftl_nv_cache.o 00:02:51.047 CC lib/nvmf/ctrlr_discovery.o 00:02:51.047 CC lib/ftl/ftl_band.o 00:02:51.047 CC lib/ftl/ftl_band_ops.o 00:02:51.047 CC lib/nvmf/subsystem.o 00:02:51.047 CC lib/ftl/ftl_writer.o 00:02:51.047 CC lib/nvmf/nvmf.o 00:02:51.047 CC lib/ftl/ftl_rq.o 00:02:51.047 CC lib/ftl/ftl_reloc.o 00:02:51.047 CC lib/nvmf/nvmf_rpc.o 00:02:51.047 CC lib/ftl/ftl_l2p_cache.o 00:02:51.047 CC lib/nvmf/transport.o 00:02:51.047 CC lib/ftl/ftl_p2l.o 00:02:51.047 
CC lib/nvmf/tcp.o 00:02:51.047 CC lib/ftl/ftl_p2l_log.o 00:02:51.047 CC lib/nvmf/stubs.o 00:02:51.047 CC lib/ftl/mngt/ftl_mngt.o 00:02:51.047 CC lib/nvmf/mdns_server.o 00:02:51.047 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:51.047 CC lib/nvmf/vfio_user.o 00:02:51.047 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:51.047 CC lib/nvmf/rdma.o 00:02:51.047 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:51.047 CC lib/nvmf/auth.o 00:02:51.047 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:51.047 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:51.047 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:51.047 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:51.047 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:51.047 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:51.047 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:51.047 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:51.047 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:51.047 CC lib/ftl/utils/ftl_md.o 00:02:51.047 CC lib/ftl/utils/ftl_conf.o 00:02:51.047 CC lib/ftl/utils/ftl_mempool.o 00:02:51.047 CC lib/ftl/utils/ftl_bitmap.o 00:02:51.047 CC lib/ftl/utils/ftl_property.o 00:02:51.047 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:51.047 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:51.047 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:51.047 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:51.047 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:51.047 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:51.047 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:51.047 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:51.306 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:51.306 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:51.306 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:51.306 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:51.306 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:51.306 CC lib/ftl/base/ftl_base_dev.o 00:02:51.306 CC lib/ftl/base/ftl_base_bdev.o 00:02:51.306 CC lib/ftl/ftl_trace.o 00:02:51.564 LIB libspdk_scsi.a 00:02:51.564 LIB libspdk_nbd.a 00:02:51.564 LIB libspdk_ublk.a 00:02:51.824 CC lib/iscsi/iscsi.o 00:02:51.824 CC lib/iscsi/conn.o 00:02:51.824 CC lib/iscsi/param.o 00:02:51.824 CC lib/iscsi/init_grp.o 00:02:51.824 CC lib/iscsi/tgt_node.o 00:02:51.824 CC lib/iscsi/portal_grp.o 00:02:51.824 CC lib/iscsi/task.o 00:02:51.824 CC lib/iscsi/iscsi_subsystem.o 00:02:51.824 CC lib/iscsi/iscsi_rpc.o 00:02:51.824 LIB libspdk_ftl.a 00:02:51.824 CC lib/vhost/vhost.o 00:02:51.824 CC lib/vhost/vhost_rpc.o 00:02:51.824 CC lib/vhost/vhost_scsi.o 00:02:51.824 CC lib/vhost/vhost_blk.o 00:02:51.824 CC lib/vhost/rte_vhost_user.o 00:02:52.392 LIB libspdk_nvmf.a 00:02:52.392 LIB libspdk_vhost.a 00:02:52.392 LIB libspdk_iscsi.a 00:02:52.960 CC module/vfu_device/vfu_virtio.o 00:02:52.960 CC module/vfu_device/vfu_virtio_rpc.o 00:02:52.960 CC module/vfu_device/vfu_virtio_blk.o 00:02:52.960 CC module/vfu_device/vfu_virtio_scsi.o 00:02:52.960 CC module/vfu_device/vfu_virtio_fs.o 00:02:52.960 CC module/env_dpdk/env_dpdk_rpc.o 00:02:52.960 CC module/keyring/file/keyring.o 00:02:52.960 CC module/keyring/file/keyring_rpc.o 00:02:52.960 CC module/keyring/linux/keyring.o 00:02:52.960 CC module/keyring/linux/keyring_rpc.o 00:02:52.960 CC module/accel/ioat/accel_ioat.o 00:02:52.960 CC module/accel/ioat/accel_ioat_rpc.o 00:02:52.960 CC module/scheduler/gscheduler/gscheduler.o 00:02:53.217 LIB libspdk_env_dpdk_rpc.a 00:02:53.217 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:53.217 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:53.217 CC module/fsdev/aio/fsdev_aio.o 00:02:53.217 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:53.217 CC module/fsdev/aio/linux_aio_mgr.o 00:02:53.217 CC module/accel/dsa/accel_dsa.o 
00:02:53.217 CC module/accel/dsa/accel_dsa_rpc.o 00:02:53.217 CC module/accel/iaa/accel_iaa.o 00:02:53.217 CC module/sock/posix/posix.o 00:02:53.217 CC module/accel/iaa/accel_iaa_rpc.o 00:02:53.217 CC module/accel/error/accel_error.o 00:02:53.217 CC module/accel/error/accel_error_rpc.o 00:02:53.217 CC module/blob/bdev/blob_bdev.o 00:02:53.217 LIB libspdk_keyring_linux.a 00:02:53.217 LIB libspdk_keyring_file.a 00:02:53.217 LIB libspdk_scheduler_gscheduler.a 00:02:53.217 LIB libspdk_scheduler_dpdk_governor.a 00:02:53.217 LIB libspdk_scheduler_dynamic.a 00:02:53.217 LIB libspdk_accel_ioat.a 00:02:53.217 LIB libspdk_accel_error.a 00:02:53.217 LIB libspdk_accel_iaa.a 00:02:53.217 LIB libspdk_blob_bdev.a 00:02:53.217 LIB libspdk_accel_dsa.a 00:02:53.476 LIB libspdk_vfu_device.a 00:02:53.476 LIB libspdk_sock_posix.a 00:02:53.476 LIB libspdk_fsdev_aio.a 00:02:53.733 CC module/bdev/gpt/gpt.o 00:02:53.733 CC module/bdev/gpt/vbdev_gpt.o 00:02:53.733 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:53.733 CC module/blobfs/bdev/blobfs_bdev.o 00:02:53.733 CC module/bdev/nvme/bdev_nvme.o 00:02:53.733 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:53.733 CC module/bdev/nvme/nvme_rpc.o 00:02:53.733 CC module/bdev/error/vbdev_error_rpc.o 00:02:53.733 CC module/bdev/nvme/bdev_mdns_client.o 00:02:53.733 CC module/bdev/error/vbdev_error.o 00:02:53.733 CC module/bdev/nvme/vbdev_opal.o 00:02:53.733 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:53.733 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:53.733 CC module/bdev/raid/bdev_raid.o 00:02:53.733 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:53.733 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:53.733 CC module/bdev/raid/bdev_raid_sb.o 00:02:53.733 CC module/bdev/delay/vbdev_delay.o 00:02:53.733 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:53.733 CC module/bdev/raid/concat.o 00:02:53.733 CC module/bdev/raid/bdev_raid_rpc.o 00:02:53.733 CC module/bdev/malloc/bdev_malloc.o 00:02:53.733 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:53.733 CC module/bdev/raid/raid0.o 00:02:53.733 CC module/bdev/raid/raid1.o 00:02:53.733 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:53.733 CC module/bdev/iscsi/bdev_iscsi.o 00:02:53.733 CC module/bdev/null/bdev_null.o 00:02:53.733 CC module/bdev/null/bdev_null_rpc.o 00:02:53.733 CC module/bdev/passthru/vbdev_passthru.o 00:02:53.733 CC module/bdev/aio/bdev_aio.o 00:02:53.733 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:53.733 CC module/bdev/aio/bdev_aio_rpc.o 00:02:53.733 CC module/bdev/split/vbdev_split.o 00:02:53.733 CC module/bdev/split/vbdev_split_rpc.o 00:02:53.733 CC module/bdev/lvol/vbdev_lvol.o 00:02:53.733 CC module/bdev/ftl/bdev_ftl.o 00:02:53.733 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:53.733 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:53.733 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:53.733 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:53.733 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:53.991 LIB libspdk_blobfs_bdev.a 00:02:53.991 LIB libspdk_bdev_gpt.a 00:02:53.991 LIB libspdk_bdev_error.a 00:02:53.991 LIB libspdk_bdev_split.a 00:02:53.991 LIB libspdk_bdev_null.a 00:02:53.991 LIB libspdk_bdev_passthru.a 00:02:53.991 LIB libspdk_bdev_ftl.a 00:02:53.991 LIB libspdk_bdev_zone_block.a 00:02:53.991 LIB libspdk_bdev_iscsi.a 00:02:53.991 LIB libspdk_bdev_malloc.a 00:02:53.991 LIB libspdk_bdev_aio.a 00:02:53.991 LIB libspdk_bdev_delay.a 00:02:53.991 LIB libspdk_bdev_lvol.a 00:02:54.250 LIB libspdk_bdev_virtio.a 00:02:54.250 LIB libspdk_bdev_raid.a 00:02:55.187 LIB libspdk_bdev_nvme.a 00:02:55.756 CC 
module/event/subsystems/vhost_blk/vhost_blk.o 00:02:55.756 CC module/event/subsystems/keyring/keyring.o 00:02:55.756 CC module/event/subsystems/iobuf/iobuf.o 00:02:55.756 CC module/event/subsystems/vmd/vmd.o 00:02:55.756 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:55.756 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:55.756 CC module/event/subsystems/scheduler/scheduler.o 00:02:55.756 CC module/event/subsystems/sock/sock.o 00:02:55.756 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:55.756 CC module/event/subsystems/fsdev/fsdev.o 00:02:55.756 LIB libspdk_event_keyring.a 00:02:55.756 LIB libspdk_event_vhost_blk.a 00:02:55.756 LIB libspdk_event_scheduler.a 00:02:55.756 LIB libspdk_event_sock.a 00:02:55.756 LIB libspdk_event_vmd.a 00:02:55.756 LIB libspdk_event_iobuf.a 00:02:56.015 LIB libspdk_event_vfu_tgt.a 00:02:56.015 LIB libspdk_event_fsdev.a 00:02:56.285 CC module/event/subsystems/accel/accel.o 00:02:56.285 LIB libspdk_event_accel.a 00:02:56.543 CC module/event/subsystems/bdev/bdev.o 00:02:56.803 LIB libspdk_event_bdev.a 00:02:57.062 CC module/event/subsystems/nbd/nbd.o 00:02:57.062 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:57.062 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:57.062 CC module/event/subsystems/scsi/scsi.o 00:02:57.062 CC module/event/subsystems/ublk/ublk.o 00:02:57.062 LIB libspdk_event_nbd.a 00:02:57.062 LIB libspdk_event_ublk.a 00:02:57.062 LIB libspdk_event_scsi.a 00:02:57.321 LIB libspdk_event_nvmf.a 00:02:57.581 CC module/event/subsystems/iscsi/iscsi.o 00:02:57.581 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:57.581 LIB libspdk_event_vhost_scsi.a 00:02:57.581 LIB libspdk_event_iscsi.a 00:02:57.840 TEST_HEADER include/spdk/accel_module.h 00:02:57.840 CC app/trace_record/trace_record.o 00:02:57.840 TEST_HEADER include/spdk/accel.h 00:02:57.840 TEST_HEADER include/spdk/assert.h 00:02:57.840 TEST_HEADER include/spdk/bdev.h 00:02:57.840 TEST_HEADER include/spdk/barrier.h 00:02:57.840 TEST_HEADER include/spdk/bdev_module.h 00:02:57.840 CC app/spdk_nvme_discover/discovery_aer.o 00:02:57.840 TEST_HEADER include/spdk/base64.h 00:02:57.840 TEST_HEADER include/spdk/bit_array.h 00:02:57.840 TEST_HEADER include/spdk/bdev_zone.h 00:02:57.840 TEST_HEADER include/spdk/bit_pool.h 00:02:57.840 TEST_HEADER include/spdk/blob_bdev.h 00:02:57.840 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:57.840 CC app/spdk_nvme_identify/identify.o 00:02:57.840 TEST_HEADER include/spdk/config.h 00:02:57.840 TEST_HEADER include/spdk/blobfs.h 00:02:57.840 TEST_HEADER include/spdk/conf.h 00:02:57.840 TEST_HEADER include/spdk/blob.h 00:02:57.840 TEST_HEADER include/spdk/cpuset.h 00:02:57.840 TEST_HEADER include/spdk/crc16.h 00:02:57.840 TEST_HEADER include/spdk/crc32.h 00:02:57.840 TEST_HEADER include/spdk/crc64.h 00:02:57.840 TEST_HEADER include/spdk/dif.h 00:02:57.840 TEST_HEADER include/spdk/dma.h 00:02:57.840 TEST_HEADER include/spdk/endian.h 00:02:57.840 CXX app/trace/trace.o 00:02:57.840 CC app/spdk_lspci/spdk_lspci.o 00:02:57.840 TEST_HEADER include/spdk/env.h 00:02:57.840 TEST_HEADER include/spdk/env_dpdk.h 00:02:57.840 TEST_HEADER include/spdk/event.h 00:02:57.840 CC app/spdk_top/spdk_top.o 00:02:57.840 TEST_HEADER include/spdk/fd_group.h 00:02:57.840 TEST_HEADER include/spdk/file.h 00:02:57.840 TEST_HEADER include/spdk/fd.h 00:02:57.840 TEST_HEADER include/spdk/fsdev.h 00:02:57.840 TEST_HEADER include/spdk/fsdev_module.h 00:02:57.840 TEST_HEADER include/spdk/fuse_dispatcher.h 00:02:57.840 TEST_HEADER include/spdk/ftl.h 00:02:57.840 TEST_HEADER 
include/spdk/gpt_spec.h 00:02:57.840 TEST_HEADER include/spdk/idxd.h 00:02:57.840 TEST_HEADER include/spdk/hexlify.h 00:02:57.840 CC app/spdk_nvme_perf/perf.o 00:02:57.840 TEST_HEADER include/spdk/histogram_data.h 00:02:57.840 TEST_HEADER include/spdk/ioat_spec.h 00:02:57.840 TEST_HEADER include/spdk/ioat.h 00:02:57.840 TEST_HEADER include/spdk/init.h 00:02:57.840 TEST_HEADER include/spdk/idxd_spec.h 00:02:57.840 CC app/nvmf_tgt/nvmf_main.o 00:02:57.840 TEST_HEADER include/spdk/json.h 00:02:57.840 TEST_HEADER include/spdk/iscsi_spec.h 00:02:57.840 TEST_HEADER include/spdk/jsonrpc.h 00:02:57.840 TEST_HEADER include/spdk/keyring.h 00:02:57.840 TEST_HEADER include/spdk/keyring_module.h 00:02:57.840 TEST_HEADER include/spdk/likely.h 00:02:57.840 TEST_HEADER include/spdk/lvol.h 00:02:57.840 TEST_HEADER include/spdk/log.h 00:02:57.840 TEST_HEADER include/spdk/mmio.h 00:02:57.840 CC test/rpc_client/rpc_client_test.o 00:02:57.840 TEST_HEADER include/spdk/md5.h 00:02:57.840 TEST_HEADER include/spdk/memory.h 00:02:57.840 TEST_HEADER include/spdk/nbd.h 00:02:57.841 TEST_HEADER include/spdk/net.h 00:02:57.841 TEST_HEADER include/spdk/nvme.h 00:02:57.841 TEST_HEADER include/spdk/notify.h 00:02:57.841 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:57.841 TEST_HEADER include/spdk/nvme_intel.h 00:02:57.841 TEST_HEADER include/spdk/nvme_zns.h 00:02:57.841 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:57.841 TEST_HEADER include/spdk/nvme_spec.h 00:02:57.841 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:57.841 TEST_HEADER include/spdk/nvmf.h 00:02:57.841 CC app/iscsi_tgt/iscsi_tgt.o 00:02:57.841 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:57.841 TEST_HEADER include/spdk/nvmf_transport.h 00:02:57.841 TEST_HEADER include/spdk/nvmf_spec.h 00:02:57.841 TEST_HEADER include/spdk/opal.h 00:02:57.841 TEST_HEADER include/spdk/pci_ids.h 00:02:57.841 CC app/spdk_dd/spdk_dd.o 00:02:57.841 TEST_HEADER include/spdk/opal_spec.h 00:02:57.841 TEST_HEADER include/spdk/pipe.h 00:02:57.841 TEST_HEADER include/spdk/queue.h 00:02:57.841 TEST_HEADER include/spdk/rpc.h 00:02:57.841 TEST_HEADER include/spdk/reduce.h 00:02:57.841 TEST_HEADER include/spdk/scsi.h 00:02:57.841 TEST_HEADER include/spdk/scsi_spec.h 00:02:57.841 TEST_HEADER include/spdk/scheduler.h 00:02:57.841 TEST_HEADER include/spdk/stdinc.h 00:02:57.841 TEST_HEADER include/spdk/sock.h 00:02:57.841 TEST_HEADER include/spdk/string.h 00:02:57.841 TEST_HEADER include/spdk/thread.h 00:02:57.841 TEST_HEADER include/spdk/tree.h 00:02:57.841 TEST_HEADER include/spdk/trace.h 00:02:57.841 TEST_HEADER include/spdk/trace_parser.h 00:02:57.841 TEST_HEADER include/spdk/ublk.h 00:02:58.108 TEST_HEADER include/spdk/uuid.h 00:02:58.108 TEST_HEADER include/spdk/util.h 00:02:58.108 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:58.108 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:58.108 TEST_HEADER include/spdk/version.h 00:02:58.108 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:58.108 TEST_HEADER include/spdk/vhost.h 00:02:58.108 TEST_HEADER include/spdk/xor.h 00:02:58.108 TEST_HEADER include/spdk/vmd.h 00:02:58.108 TEST_HEADER include/spdk/zipf.h 00:02:58.108 CXX test/cpp_headers/accel_module.o 00:02:58.108 CXX test/cpp_headers/accel.o 00:02:58.108 CXX test/cpp_headers/assert.o 00:02:58.108 CXX test/cpp_headers/base64.o 00:02:58.108 CXX test/cpp_headers/bdev.o 00:02:58.108 CXX test/cpp_headers/bdev_module.o 00:02:58.108 CXX test/cpp_headers/bdev_zone.o 00:02:58.108 CXX test/cpp_headers/barrier.o 00:02:58.108 CXX test/cpp_headers/bit_array.o 00:02:58.108 CXX 
test/cpp_headers/blob_bdev.o 00:02:58.108 CXX test/cpp_headers/bit_pool.o 00:02:58.108 CXX test/cpp_headers/blobfs_bdev.o 00:02:58.108 CXX test/cpp_headers/blobfs.o 00:02:58.108 CXX test/cpp_headers/blob.o 00:02:58.108 CXX test/cpp_headers/conf.o 00:02:58.108 CXX test/cpp_headers/config.o 00:02:58.108 CXX test/cpp_headers/cpuset.o 00:02:58.108 CC app/spdk_tgt/spdk_tgt.o 00:02:58.108 CXX test/cpp_headers/crc16.o 00:02:58.108 CXX test/cpp_headers/crc32.o 00:02:58.108 CXX test/cpp_headers/crc64.o 00:02:58.108 CXX test/cpp_headers/dif.o 00:02:58.108 CXX test/cpp_headers/dma.o 00:02:58.108 CXX test/cpp_headers/env_dpdk.o 00:02:58.108 CXX test/cpp_headers/endian.o 00:02:58.108 CXX test/cpp_headers/env.o 00:02:58.108 CXX test/cpp_headers/event.o 00:02:58.108 CXX test/cpp_headers/fd_group.o 00:02:58.108 CXX test/cpp_headers/fd.o 00:02:58.108 CXX test/cpp_headers/file.o 00:02:58.108 CXX test/cpp_headers/fsdev_module.o 00:02:58.108 CXX test/cpp_headers/ftl.o 00:02:58.108 CXX test/cpp_headers/fsdev.o 00:02:58.108 CXX test/cpp_headers/gpt_spec.o 00:02:58.108 CXX test/cpp_headers/fuse_dispatcher.o 00:02:58.108 CXX test/cpp_headers/idxd_spec.o 00:02:58.108 CXX test/cpp_headers/hexlify.o 00:02:58.108 CXX test/cpp_headers/idxd.o 00:02:58.108 CXX test/cpp_headers/histogram_data.o 00:02:58.108 CXX test/cpp_headers/ioat_spec.o 00:02:58.108 CXX test/cpp_headers/ioat.o 00:02:58.108 CXX test/cpp_headers/init.o 00:02:58.108 CXX test/cpp_headers/json.o 00:02:58.108 CXX test/cpp_headers/jsonrpc.o 00:02:58.108 CXX test/cpp_headers/keyring_module.o 00:02:58.108 CXX test/cpp_headers/keyring.o 00:02:58.108 CXX test/cpp_headers/iscsi_spec.o 00:02:58.108 CXX test/cpp_headers/log.o 00:02:58.108 CXX test/cpp_headers/likely.o 00:02:58.108 CXX test/cpp_headers/lvol.o 00:02:58.108 CXX test/cpp_headers/memory.o 00:02:58.108 CXX test/cpp_headers/md5.o 00:02:58.108 CXX test/cpp_headers/mmio.o 00:02:58.108 CXX test/cpp_headers/nbd.o 00:02:58.108 CXX test/cpp_headers/net.o 00:02:58.108 CXX test/cpp_headers/notify.o 00:02:58.108 CXX test/cpp_headers/nvme.o 00:02:58.108 CXX test/cpp_headers/nvme_intel.o 00:02:58.108 CXX test/cpp_headers/nvme_ocssd.o 00:02:58.108 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:58.108 CXX test/cpp_headers/nvme_spec.o 00:02:58.108 CXX test/cpp_headers/nvmf_cmd.o 00:02:58.108 CXX test/cpp_headers/nvme_zns.o 00:02:58.108 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:58.108 CXX test/cpp_headers/nvmf.o 00:02:58.108 CXX test/cpp_headers/nvmf_spec.o 00:02:58.108 CXX test/cpp_headers/nvmf_transport.o 00:02:58.108 CXX test/cpp_headers/opal.o 00:02:58.108 CXX test/cpp_headers/opal_spec.o 00:02:58.108 CXX test/cpp_headers/pipe.o 00:02:58.108 CXX test/cpp_headers/pci_ids.o 00:02:58.108 CXX test/cpp_headers/queue.o 00:02:58.108 CXX test/cpp_headers/reduce.o 00:02:58.108 CXX test/cpp_headers/rpc.o 00:02:58.108 CXX test/cpp_headers/scheduler.o 00:02:58.108 CXX test/cpp_headers/scsi.o 00:02:58.108 CXX test/cpp_headers/scsi_spec.o 00:02:58.108 CC examples/ioat/perf/perf.o 00:02:58.108 CXX test/cpp_headers/sock.o 00:02:58.108 CXX test/cpp_headers/stdinc.o 00:02:58.108 CXX test/cpp_headers/string.o 00:02:58.108 CXX test/cpp_headers/thread.o 00:02:58.108 CXX test/cpp_headers/trace.o 00:02:58.108 CXX test/cpp_headers/trace_parser.o 00:02:58.108 CC test/env/vtophys/vtophys.o 00:02:58.108 CC test/app/stub/stub.o 00:02:58.108 CC examples/util/zipf/zipf.o 00:02:58.108 CC test/app/jsoncat/jsoncat.o 00:02:58.108 CC examples/ioat/verify/verify.o 00:02:58.108 CC app/fio/nvme/fio_plugin.o 00:02:58.108 CC 
test/thread/poller_perf/poller_perf.o 00:02:58.108 CC test/app/histogram_perf/histogram_perf.o 00:02:58.108 CC test/thread/lock/spdk_lock.o 00:02:58.108 CC test/env/pci/pci_ut.o 00:02:58.108 CC test/env/memory/memory_ut.o 00:02:58.108 LINK spdk_lspci 00:02:58.108 CXX test/cpp_headers/tree.o 00:02:58.108 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:58.108 CXX test/cpp_headers/ublk.o 00:02:58.108 LINK spdk_nvme_discover 00:02:58.108 CC test/dma/test_dma/test_dma.o 00:02:58.108 CC test/app/bdev_svc/bdev_svc.o 00:02:58.108 CC app/fio/bdev/fio_plugin.o 00:02:58.108 LINK rpc_client_test 00:02:58.108 LINK nvmf_tgt 00:02:58.108 CC test/env/mem_callbacks/mem_callbacks.o 00:02:58.108 LINK spdk_trace_record 00:02:58.108 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:58.108 LINK interrupt_tgt 00:02:58.108 CXX test/cpp_headers/util.o 00:02:58.108 CXX test/cpp_headers/uuid.o 00:02:58.108 CXX test/cpp_headers/version.o 00:02:58.108 CXX test/cpp_headers/vfio_user_pci.o 00:02:58.368 CXX test/cpp_headers/vfio_user_spec.o 00:02:58.368 CXX test/cpp_headers/vhost.o 00:02:58.368 CXX test/cpp_headers/vmd.o 00:02:58.368 CXX test/cpp_headers/xor.o 00:02:58.368 CXX test/cpp_headers/zipf.o 00:02:58.368 LINK iscsi_tgt 00:02:58.368 LINK jsoncat 00:02:58.368 LINK vtophys 00:02:58.368 LINK poller_perf 00:02:58.368 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:58.368 LINK zipf 00:02:58.368 LINK histogram_perf 00:02:58.368 LINK env_dpdk_post_init 00:02:58.368 LINK spdk_tgt 00:02:58.368 LINK verify 00:02:58.368 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:58.368 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:58.368 LINK ioat_perf 00:02:58.368 LINK stub 00:02:58.368 CC test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.o 00:02:58.368 CC test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.o 00:02:58.368 LINK spdk_trace 00:02:58.369 LINK bdev_svc 00:02:58.369 LINK spdk_dd 00:02:58.369 LINK pci_ut 00:02:58.627 LINK nvme_fuzz 00:02:58.627 LINK test_dma 00:02:58.627 LINK spdk_nvme_identify 00:02:58.627 LINK spdk_nvme 00:02:58.627 LINK spdk_nvme_perf 00:02:58.627 LINK llvm_vfio_fuzz 00:02:58.627 LINK spdk_bdev 00:02:58.627 LINK vhost_fuzz 00:02:58.627 LINK mem_callbacks 00:02:58.627 LINK spdk_top 00:02:58.886 CC examples/vmd/lsvmd/lsvmd.o 00:02:58.886 CC examples/vmd/led/led.o 00:02:58.886 CC examples/idxd/perf/perf.o 00:02:58.886 LINK llvm_nvme_fuzz 00:02:58.886 CC app/vhost/vhost.o 00:02:58.886 CC examples/sock/hello_world/hello_sock.o 00:02:58.886 CC examples/thread/thread/thread_ex.o 00:02:58.886 LINK led 00:02:58.886 LINK lsvmd 00:02:58.886 LINK memory_ut 00:02:59.146 LINK vhost 00:02:59.146 LINK hello_sock 00:02:59.146 LINK idxd_perf 00:02:59.146 LINK thread 00:02:59.146 LINK spdk_lock 00:02:59.406 LINK iscsi_fuzz 00:02:59.665 CC examples/nvme/reconnect/reconnect.o 00:02:59.665 CC examples/nvme/hello_world/hello_world.o 00:02:59.665 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:59.665 CC examples/nvme/abort/abort.o 00:02:59.665 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:59.665 CC examples/nvme/arbitration/arbitration.o 00:02:59.665 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:59.665 CC examples/nvme/hotplug/hotplug.o 00:02:59.923 CC test/event/reactor/reactor.o 00:02:59.923 CC test/event/event_perf/event_perf.o 00:02:59.923 CC test/event/reactor_perf/reactor_perf.o 00:02:59.923 CC test/event/app_repeat/app_repeat.o 00:02:59.923 CC test/event/scheduler/scheduler.o 00:02:59.923 LINK pmr_persistence 00:02:59.923 LINK cmb_copy 00:02:59.923 LINK hello_world 00:02:59.923 LINK hotplug 00:02:59.923 LINK event_perf 
00:02:59.923 LINK reactor_perf 00:02:59.923 LINK reactor 00:02:59.923 LINK reconnect 00:02:59.923 LINK app_repeat 00:02:59.923 LINK abort 00:02:59.923 LINK arbitration 00:02:59.923 LINK nvme_manage 00:02:59.923 LINK scheduler 00:03:00.181 CC test/nvme/boot_partition/boot_partition.o 00:03:00.181 CC test/nvme/e2edp/nvme_dp.o 00:03:00.181 CC test/nvme/fused_ordering/fused_ordering.o 00:03:00.181 CC test/nvme/overhead/overhead.o 00:03:00.181 CC test/nvme/aer/aer.o 00:03:00.181 CC test/nvme/startup/startup.o 00:03:00.181 CC test/nvme/reset/reset.o 00:03:00.181 CC test/nvme/cuse/cuse.o 00:03:00.181 CC test/nvme/connect_stress/connect_stress.o 00:03:00.181 CC test/nvme/sgl/sgl.o 00:03:00.181 CC test/nvme/simple_copy/simple_copy.o 00:03:00.181 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:00.181 CC test/nvme/fdp/fdp.o 00:03:00.181 CC test/nvme/compliance/nvme_compliance.o 00:03:00.181 CC test/blobfs/mkfs/mkfs.o 00:03:00.181 CC test/nvme/reserve/reserve.o 00:03:00.181 CC test/nvme/err_injection/err_injection.o 00:03:00.181 CC test/accel/dif/dif.o 00:03:00.181 CC test/lvol/esnap/esnap.o 00:03:00.181 LINK boot_partition 00:03:00.181 LINK startup 00:03:00.181 LINK fused_ordering 00:03:00.181 LINK connect_stress 00:03:00.181 LINK doorbell_aers 00:03:00.439 LINK err_injection 00:03:00.439 LINK reserve 00:03:00.439 LINK simple_copy 00:03:00.439 LINK mkfs 00:03:00.439 LINK nvme_dp 00:03:00.440 LINK reset 00:03:00.440 LINK aer 00:03:00.440 LINK overhead 00:03:00.440 LINK sgl 00:03:00.440 LINK fdp 00:03:00.440 LINK nvme_compliance 00:03:00.698 LINK dif 00:03:00.698 CC examples/accel/perf/accel_perf.o 00:03:00.698 CC examples/blob/cli/blobcli.o 00:03:00.698 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:00.698 CC examples/blob/hello_world/hello_blob.o 00:03:00.956 LINK hello_fsdev 00:03:00.956 LINK hello_blob 00:03:00.956 LINK cuse 00:03:00.956 LINK accel_perf 00:03:00.957 LINK blobcli 00:03:01.892 CC examples/bdev/hello_world/hello_bdev.o 00:03:01.892 CC examples/bdev/bdevperf/bdevperf.o 00:03:01.892 LINK hello_bdev 00:03:02.150 CC test/bdev/bdevio/bdevio.o 00:03:02.150 LINK bdevperf 00:03:02.408 LINK bdevio 00:03:03.786 LINK esnap 00:03:03.786 CC examples/nvmf/nvmf/nvmf.o 00:03:03.786 LINK nvmf 00:03:05.180 00:03:05.180 real 0m45.949s 00:03:05.180 user 6m15.756s 00:03:05.180 sys 2m31.936s 00:03:05.180 15:02:30 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:05.180 15:02:30 make -- common/autotest_common.sh@10 -- $ set +x 00:03:05.180 ************************************ 00:03:05.180 END TEST make 00:03:05.180 ************************************ 00:03:05.180 15:02:30 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:05.180 15:02:30 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:05.180 15:02:30 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:05.180 15:02:30 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:05.180 15:02:30 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:05.180 15:02:30 -- pm/common@44 -- $ pid=2239270 00:03:05.180 15:02:30 -- pm/common@50 -- $ kill -TERM 2239270 00:03:05.180 15:02:30 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:05.180 15:02:30 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:05.180 15:02:30 -- pm/common@44 -- $ pid=2239272 00:03:05.180 15:02:30 -- pm/common@50 -- $ kill -TERM 2239272 00:03:05.180 15:02:30 -- pm/common@42 -- $ for 
monitor in "${MONITOR_RESOURCES[@]}" 00:03:05.180 15:02:30 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:05.180 15:02:30 -- pm/common@44 -- $ pid=2239274 00:03:05.180 15:02:30 -- pm/common@50 -- $ kill -TERM 2239274 00:03:05.180 15:02:30 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:05.180 15:02:30 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:05.180 15:02:30 -- pm/common@44 -- $ pid=2239297 00:03:05.180 15:02:30 -- pm/common@50 -- $ sudo -E kill -TERM 2239297 00:03:05.180 15:02:30 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:05.180 15:02:30 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:03:05.440 15:02:30 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:05.440 15:02:30 -- common/autotest_common.sh@1693 -- # lcov --version 00:03:05.440 15:02:30 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:05.440 15:02:30 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:05.440 15:02:30 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:05.440 15:02:30 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:05.440 15:02:30 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:05.440 15:02:30 -- scripts/common.sh@336 -- # IFS=.-: 00:03:05.440 15:02:30 -- scripts/common.sh@336 -- # read -ra ver1 00:03:05.440 15:02:30 -- scripts/common.sh@337 -- # IFS=.-: 00:03:05.440 15:02:30 -- scripts/common.sh@337 -- # read -ra ver2 00:03:05.440 15:02:30 -- scripts/common.sh@338 -- # local 'op=<' 00:03:05.440 15:02:30 -- scripts/common.sh@340 -- # ver1_l=2 00:03:05.440 15:02:30 -- scripts/common.sh@341 -- # ver2_l=1 00:03:05.440 15:02:30 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:05.440 15:02:30 -- scripts/common.sh@344 -- # case "$op" in 00:03:05.440 15:02:30 -- scripts/common.sh@345 -- # : 1 00:03:05.440 15:02:30 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:05.440 15:02:30 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:05.440 15:02:30 -- scripts/common.sh@365 -- # decimal 1 00:03:05.440 15:02:30 -- scripts/common.sh@353 -- # local d=1 00:03:05.440 15:02:30 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:05.440 15:02:30 -- scripts/common.sh@355 -- # echo 1 00:03:05.440 15:02:30 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:05.440 15:02:30 -- scripts/common.sh@366 -- # decimal 2 00:03:05.441 15:02:30 -- scripts/common.sh@353 -- # local d=2 00:03:05.441 15:02:30 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:05.441 15:02:30 -- scripts/common.sh@355 -- # echo 2 00:03:05.441 15:02:30 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:05.441 15:02:30 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:05.441 15:02:30 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:05.441 15:02:30 -- scripts/common.sh@368 -- # return 0 00:03:05.441 15:02:30 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:05.441 15:02:30 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:05.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:05.441 --rc genhtml_branch_coverage=1 00:03:05.441 --rc genhtml_function_coverage=1 00:03:05.441 --rc genhtml_legend=1 00:03:05.441 --rc geninfo_all_blocks=1 00:03:05.441 --rc geninfo_unexecuted_blocks=1 00:03:05.441 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:05.441 ' 00:03:05.441 15:02:30 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:05.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:05.441 --rc genhtml_branch_coverage=1 00:03:05.441 --rc genhtml_function_coverage=1 00:03:05.441 --rc genhtml_legend=1 00:03:05.441 --rc geninfo_all_blocks=1 00:03:05.441 --rc geninfo_unexecuted_blocks=1 00:03:05.441 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:05.441 ' 00:03:05.441 15:02:30 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:05.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:05.441 --rc genhtml_branch_coverage=1 00:03:05.441 --rc genhtml_function_coverage=1 00:03:05.441 --rc genhtml_legend=1 00:03:05.441 --rc geninfo_all_blocks=1 00:03:05.441 --rc geninfo_unexecuted_blocks=1 00:03:05.441 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:05.441 ' 00:03:05.441 15:02:30 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:05.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:05.441 --rc genhtml_branch_coverage=1 00:03:05.441 --rc genhtml_function_coverage=1 00:03:05.441 --rc genhtml_legend=1 00:03:05.441 --rc geninfo_all_blocks=1 00:03:05.441 --rc geninfo_unexecuted_blocks=1 00:03:05.441 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:05.441 ' 00:03:05.441 15:02:30 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:03:05.441 15:02:30 -- nvmf/common.sh@7 -- # uname -s 00:03:05.441 15:02:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:05.441 15:02:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:05.441 15:02:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:05.441 15:02:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:05.441 15:02:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:05.441 15:02:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:05.441 15:02:30 -- nvmf/common.sh@14 -- 
# NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:05.441 15:02:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:05.441 15:02:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:05.441 15:02:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:05.441 15:02:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:03:05.441 15:02:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:03:05.441 15:02:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:05.441 15:02:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:05.441 15:02:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:05.441 15:02:30 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:05.441 15:02:30 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:03:05.441 15:02:30 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:05.441 15:02:30 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:05.441 15:02:30 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:05.441 15:02:30 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:05.441 15:02:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:05.441 15:02:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:05.441 15:02:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:05.441 15:02:30 -- paths/export.sh@5 -- # export PATH 00:03:05.441 15:02:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:05.441 15:02:30 -- nvmf/common.sh@51 -- # : 0 00:03:05.441 15:02:30 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:05.441 15:02:30 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:05.441 15:02:30 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:05.441 15:02:30 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:05.441 15:02:30 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:05.441 15:02:30 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:05.441 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:05.441 15:02:30 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:05.441 15:02:30 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:05.441 15:02:30 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:05.441 15:02:30 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:05.441 15:02:30 -- spdk/autotest.sh@32 -- # uname -s 00:03:05.441 
15:02:30 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:05.441 15:02:30 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:05.441 15:02:30 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:03:05.441 15:02:30 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:05.441 15:02:30 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:03:05.441 15:02:30 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:05.441 15:02:30 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:05.441 15:02:30 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:05.441 15:02:30 -- spdk/autotest.sh@48 -- # udevadm_pid=2302312 00:03:05.441 15:02:30 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:05.441 15:02:30 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:05.441 15:02:30 -- pm/common@17 -- # local monitor 00:03:05.441 15:02:30 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:05.441 15:02:30 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:05.441 15:02:30 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:05.441 15:02:30 -- pm/common@21 -- # date +%s 00:03:05.441 15:02:30 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:05.441 15:02:30 -- pm/common@21 -- # date +%s 00:03:05.441 15:02:30 -- pm/common@25 -- # sleep 1 00:03:05.441 15:02:30 -- pm/common@21 -- # date +%s 00:03:05.441 15:02:30 -- pm/common@21 -- # date +%s 00:03:05.441 15:02:30 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732716150 00:03:05.441 15:02:30 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732716150 00:03:05.441 15:02:30 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732716150 00:03:05.441 15:02:30 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732716150 00:03:05.441 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732716150_collect-vmstat.pm.log 00:03:05.441 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732716150_collect-cpu-load.pm.log 00:03:05.441 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732716150_collect-cpu-temp.pm.log 00:03:05.441 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732716150_collect-bmc-pm.bmc.pm.log 00:03:06.379 15:02:31 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:06.379 15:02:31 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:06.379 15:02:31 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:06.379 15:02:31 -- common/autotest_common.sh@10 -- # set +x 
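The four "Redirecting to ... .pm.log" lines above come from the resource monitors started at 15:02:30 by pm/common. A minimal sketch of that launch pattern, kept deliberately close to what the trace shows: the -d/-l/-p flags, the epoch-stamped prefix (1732716150 in this run), and the per-collector "_<collector>.pm.log" suffix are all taken from the log itself, while the helper name start_monitors_sketch and the SPDK_DIR variable are hypothetical stand-ins (the real collect-bmc-pm is additionally launched with sudo -E, omitted here).

    # sketch only, not the pm/common implementation
    start_monitors_sketch() {
        local output_dir=$1              # e.g. .../spdk/../output/power
        local stamp
        stamp=$(date +%s)                # 1732716150 in the run above
        for collector in collect-cpu-load collect-vmstat collect-cpu-temp; do
            # each collector logs to "<prefix>_<collector>.pm.log",
            # which is what produces the "Redirecting to ..." lines
            "$SPDK_DIR/scripts/perf/pm/$collector" \
                -d "$output_dir" -l -p "monitor.autotest.sh.$stamp" &
        done
    }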
00:03:06.379 15:02:31 -- spdk/autotest.sh@59 -- # create_test_list 00:03:06.379 15:02:31 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:06.379 15:02:31 -- common/autotest_common.sh@10 -- # set +x 00:03:06.638 15:02:31 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/autotest.sh 00:03:06.638 15:02:31 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:03:06.638 15:02:31 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:03:06.638 15:02:31 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:03:06.638 15:02:31 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:03:06.638 15:02:31 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:06.638 15:02:31 -- common/autotest_common.sh@1457 -- # uname 00:03:06.638 15:02:31 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:06.638 15:02:31 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:06.638 15:02:31 -- common/autotest_common.sh@1477 -- # uname 00:03:06.638 15:02:31 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:06.638 15:02:31 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:06.638 15:02:31 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh --version 00:03:06.638 lcov: LCOV version 1.15 00:03:06.638 15:02:31 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_base.info 00:03:13.209 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:14.586 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/mdns_server.gcno 00:03:22.702 15:02:47 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:22.702 15:02:47 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:22.702 15:02:47 -- common/autotest_common.sh@10 -- # set +x 00:03:22.702 15:02:47 -- spdk/autotest.sh@78 -- # rm -f 00:03:22.702 15:02:47 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:25.985 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:03:25.985 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:03:25.985 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:03:25.985 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:03:25.985 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:03:25.985 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:03:25.985 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:03:25.985 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:03:25.985 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:03:25.985 
0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:03:25.985 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:03:25.985 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:03:25.985 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:03:25.985 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:03:25.985 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:03:25.985 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:03:25.985 0000:d8:00.0 (8086 0a54): Already using the nvme driver 00:03:25.985 15:02:51 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:25.985 15:02:51 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:25.985 15:02:51 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:25.985 15:02:51 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:03:25.985 15:02:51 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:25.985 15:02:51 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:03:25.985 15:02:51 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:25.985 15:02:51 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:25.985 15:02:51 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:25.985 15:02:51 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:25.985 15:02:51 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:25.985 15:02:51 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:25.985 15:02:51 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:25.985 15:02:51 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:25.985 15:02:51 -- scripts/common.sh@390 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:26.244 No valid GPT data, bailing 00:03:26.244 15:02:51 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:26.244 15:02:51 -- scripts/common.sh@394 -- # pt= 00:03:26.244 15:02:51 -- scripts/common.sh@395 -- # return 1 00:03:26.244 15:02:51 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:26.244 1+0 records in 00:03:26.244 1+0 records out 00:03:26.244 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00199946 s, 524 MB/s 00:03:26.244 15:02:51 -- spdk/autotest.sh@105 -- # sync 00:03:26.244 15:02:51 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:26.244 15:02:51 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:26.244 15:02:51 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:34.364 15:02:58 -- spdk/autotest.sh@111 -- # uname -s 00:03:34.364 15:02:58 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:34.364 15:02:58 -- spdk/autotest.sh@111 -- # [[ 1 -eq 1 ]] 00:03:34.364 15:02:58 -- spdk/autotest.sh@112 -- # run_test setup.sh /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:03:34.364 15:02:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:34.364 15:02:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:34.364 15:02:58 -- common/autotest_common.sh@10 -- # set +x 00:03:34.364 ************************************ 00:03:34.364 START TEST setup.sh 00:03:34.364 ************************************ 00:03:34.364 15:02:58 setup.sh -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:03:34.364 * Looking for test storage... 
00:03:34.364 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:34.364 15:02:58 setup.sh -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:34.364 15:02:58 setup.sh -- common/autotest_common.sh@1693 -- # lcov --version 00:03:34.364 15:02:58 setup.sh -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:34.364 15:02:59 setup.sh -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:34.364 15:02:59 setup.sh -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:34.364 15:02:59 setup.sh -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:34.364 15:02:59 setup.sh -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:34.364 15:02:59 setup.sh -- scripts/common.sh@336 -- # IFS=.-: 00:03:34.364 15:02:59 setup.sh -- scripts/common.sh@336 -- # read -ra ver1 00:03:34.364 15:02:59 setup.sh -- scripts/common.sh@337 -- # IFS=.-: 00:03:34.364 15:02:59 setup.sh -- scripts/common.sh@337 -- # read -ra ver2 00:03:34.364 15:02:59 setup.sh -- scripts/common.sh@338 -- # local 'op=<' 00:03:34.364 15:02:59 setup.sh -- scripts/common.sh@340 -- # ver1_l=2 00:03:34.364 15:02:59 setup.sh -- scripts/common.sh@341 -- # ver2_l=1 00:03:34.364 15:02:59 setup.sh -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:34.364 15:02:59 setup.sh -- scripts/common.sh@344 -- # case "$op" in 00:03:34.364 15:02:59 setup.sh -- scripts/common.sh@345 -- # : 1 00:03:34.364 15:02:59 setup.sh -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:34.364 15:02:59 setup.sh -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:34.364 15:02:59 setup.sh -- scripts/common.sh@365 -- # decimal 1 00:03:34.364 15:02:59 setup.sh -- scripts/common.sh@353 -- # local d=1 00:03:34.364 15:02:59 setup.sh -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:34.364 15:02:59 setup.sh -- scripts/common.sh@355 -- # echo 1 00:03:34.364 15:02:59 setup.sh -- scripts/common.sh@365 -- # ver1[v]=1 00:03:34.364 15:02:59 setup.sh -- scripts/common.sh@366 -- # decimal 2 00:03:34.364 15:02:59 setup.sh -- scripts/common.sh@353 -- # local d=2 00:03:34.364 15:02:59 setup.sh -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:34.364 15:02:59 setup.sh -- scripts/common.sh@355 -- # echo 2 00:03:34.364 15:02:59 setup.sh -- scripts/common.sh@366 -- # ver2[v]=2 00:03:34.364 15:02:59 setup.sh -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:34.364 15:02:59 setup.sh -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:34.364 15:02:59 setup.sh -- scripts/common.sh@368 -- # return 0 00:03:34.364 15:02:59 setup.sh -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:34.364 15:02:59 setup.sh -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:34.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:34.364 --rc genhtml_branch_coverage=1 00:03:34.364 --rc genhtml_function_coverage=1 00:03:34.364 --rc genhtml_legend=1 00:03:34.364 --rc geninfo_all_blocks=1 00:03:34.364 --rc geninfo_unexecuted_blocks=1 00:03:34.364 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:34.364 ' 00:03:34.364 15:02:59 setup.sh -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:34.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:34.364 --rc genhtml_branch_coverage=1 00:03:34.364 --rc genhtml_function_coverage=1 00:03:34.364 --rc genhtml_legend=1 00:03:34.364 --rc geninfo_all_blocks=1 00:03:34.364 --rc geninfo_unexecuted_blocks=1 
00:03:34.364 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:34.364 ' 00:03:34.364 15:02:59 setup.sh -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:34.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:34.364 --rc genhtml_branch_coverage=1 00:03:34.364 --rc genhtml_function_coverage=1 00:03:34.364 --rc genhtml_legend=1 00:03:34.364 --rc geninfo_all_blocks=1 00:03:34.364 --rc geninfo_unexecuted_blocks=1 00:03:34.364 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:34.364 ' 00:03:34.364 15:02:59 setup.sh -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:34.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:34.364 --rc genhtml_branch_coverage=1 00:03:34.364 --rc genhtml_function_coverage=1 00:03:34.364 --rc genhtml_legend=1 00:03:34.364 --rc geninfo_all_blocks=1 00:03:34.364 --rc geninfo_unexecuted_blocks=1 00:03:34.364 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:34.364 ' 00:03:34.364 15:02:59 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:34.364 15:02:59 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:34.364 15:02:59 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:03:34.364 15:02:59 setup.sh -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:34.364 15:02:59 setup.sh -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:34.364 15:02:59 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:34.364 ************************************ 00:03:34.364 START TEST acl 00:03:34.364 ************************************ 00:03:34.364 15:02:59 setup.sh.acl -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:03:34.364 * Looking for test storage... 
00:03:34.364 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:34.364 15:02:59 setup.sh.acl -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:34.364 15:02:59 setup.sh.acl -- common/autotest_common.sh@1693 -- # lcov --version 00:03:34.364 15:02:59 setup.sh.acl -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:34.364 15:02:59 setup.sh.acl -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:34.364 15:02:59 setup.sh.acl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:34.364 15:02:59 setup.sh.acl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:34.364 15:02:59 setup.sh.acl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:34.364 15:02:59 setup.sh.acl -- scripts/common.sh@336 -- # IFS=.-: 00:03:34.364 15:02:59 setup.sh.acl -- scripts/common.sh@336 -- # read -ra ver1 00:03:34.364 15:02:59 setup.sh.acl -- scripts/common.sh@337 -- # IFS=.-: 00:03:34.364 15:02:59 setup.sh.acl -- scripts/common.sh@337 -- # read -ra ver2 00:03:34.364 15:02:59 setup.sh.acl -- scripts/common.sh@338 -- # local 'op=<' 00:03:34.364 15:02:59 setup.sh.acl -- scripts/common.sh@340 -- # ver1_l=2 00:03:34.364 15:02:59 setup.sh.acl -- scripts/common.sh@341 -- # ver2_l=1 00:03:34.364 15:02:59 setup.sh.acl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:34.364 15:02:59 setup.sh.acl -- scripts/common.sh@344 -- # case "$op" in 00:03:34.364 15:02:59 setup.sh.acl -- scripts/common.sh@345 -- # : 1 00:03:34.364 15:02:59 setup.sh.acl -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:34.364 15:02:59 setup.sh.acl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:34.364 15:02:59 setup.sh.acl -- scripts/common.sh@365 -- # decimal 1 00:03:34.364 15:02:59 setup.sh.acl -- scripts/common.sh@353 -- # local d=1 00:03:34.364 15:02:59 setup.sh.acl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:34.364 15:02:59 setup.sh.acl -- scripts/common.sh@355 -- # echo 1 00:03:34.364 15:02:59 setup.sh.acl -- scripts/common.sh@365 -- # ver1[v]=1 00:03:34.364 15:02:59 setup.sh.acl -- scripts/common.sh@366 -- # decimal 2 00:03:34.364 15:02:59 setup.sh.acl -- scripts/common.sh@353 -- # local d=2 00:03:34.364 15:02:59 setup.sh.acl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:34.365 15:02:59 setup.sh.acl -- scripts/common.sh@355 -- # echo 2 00:03:34.365 15:02:59 setup.sh.acl -- scripts/common.sh@366 -- # ver2[v]=2 00:03:34.365 15:02:59 setup.sh.acl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:34.365 15:02:59 setup.sh.acl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:34.365 15:02:59 setup.sh.acl -- scripts/common.sh@368 -- # return 0 00:03:34.365 15:02:59 setup.sh.acl -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:34.365 15:02:59 setup.sh.acl -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:34.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:34.365 --rc genhtml_branch_coverage=1 00:03:34.365 --rc genhtml_function_coverage=1 00:03:34.365 --rc genhtml_legend=1 00:03:34.365 --rc geninfo_all_blocks=1 00:03:34.365 --rc geninfo_unexecuted_blocks=1 00:03:34.365 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:34.365 ' 00:03:34.365 15:02:59 setup.sh.acl -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:34.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:34.365 --rc genhtml_branch_coverage=1 00:03:34.365 --rc 
genhtml_function_coverage=1 00:03:34.365 --rc genhtml_legend=1 00:03:34.365 --rc geninfo_all_blocks=1 00:03:34.365 --rc geninfo_unexecuted_blocks=1 00:03:34.365 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:34.365 ' 00:03:34.365 15:02:59 setup.sh.acl -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:34.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:34.365 --rc genhtml_branch_coverage=1 00:03:34.365 --rc genhtml_function_coverage=1 00:03:34.365 --rc genhtml_legend=1 00:03:34.365 --rc geninfo_all_blocks=1 00:03:34.365 --rc geninfo_unexecuted_blocks=1 00:03:34.365 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:34.365 ' 00:03:34.365 15:02:59 setup.sh.acl -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:34.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:34.365 --rc genhtml_branch_coverage=1 00:03:34.365 --rc genhtml_function_coverage=1 00:03:34.365 --rc genhtml_legend=1 00:03:34.365 --rc geninfo_all_blocks=1 00:03:34.365 --rc geninfo_unexecuted_blocks=1 00:03:34.365 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:34.365 ' 00:03:34.365 15:02:59 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:34.365 15:02:59 setup.sh.acl -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:34.365 15:02:59 setup.sh.acl -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:34.365 15:02:59 setup.sh.acl -- common/autotest_common.sh@1658 -- # local nvme bdf 00:03:34.365 15:02:59 setup.sh.acl -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:34.365 15:02:59 setup.sh.acl -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:03:34.365 15:02:59 setup.sh.acl -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:34.365 15:02:59 setup.sh.acl -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:34.365 15:02:59 setup.sh.acl -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:34.365 15:02:59 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:34.365 15:02:59 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:34.365 15:02:59 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:34.365 15:02:59 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:34.365 15:02:59 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:34.365 15:02:59 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:34.365 15:02:59 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:37.650 15:03:02 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:37.650 15:03:02 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:37.650 15:03:02 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:37.931 15:03:02 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:37.931 15:03:02 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:37.931 15:03:02 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:03:41.263 Hugepages 00:03:41.263 node hugesize free / total 00:03:41.263 15:03:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:41.263 15:03:06 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:41.263 15:03:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:41.263 15:03:06 
setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:41.263 15:03:06 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:41.263 15:03:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:41.263 15:03:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:41.263 15:03:06 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:41.263 15:03:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:41.263 00:03:41.263 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:41.263 15:03:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:41.263 15:03:06 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:41.263 15:03:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:41.263 15:03:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:41.263 15:03:06 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:41.263 15:03:06 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:41.263 15:03:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:41.263 15:03:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:03:41.263 15:03:06 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:41.263 15:03:06 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:41.263 15:03:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:41.263 15:03:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:03:41.263 15:03:06 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:41.263 15:03:06 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:41.263 15:03:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:41.263 15:03:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:03:41.263 15:03:06 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:41.263 15:03:06 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:41.263 15:03:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:41.263 15:03:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:03:41.263 15:03:06 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:41.263 15:03:06 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:41.263 15:03:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:41.263 15:03:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:03:41.263 15:03:06 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:41.263 15:03:06 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:41.263 15:03:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:41.263 15:03:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:03:41.263 15:03:06 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:41.263 15:03:06 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:41.264 15:03:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:41.264 15:03:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:03:41.264 15:03:06 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:41.264 15:03:06 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:41.264 15:03:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:41.264 15:03:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:03:41.264 15:03:06 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme 
]] 00:03:41.264 15:03:06 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:41.264 15:03:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:41.264 15:03:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:03:41.264 15:03:06 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:41.264 15:03:06 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:41.264 15:03:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:41.264 15:03:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:03:41.264 15:03:06 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:41.264 15:03:06 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:41.264 15:03:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:41.264 15:03:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:03:41.264 15:03:06 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:41.264 15:03:06 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:41.264 15:03:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:41.264 15:03:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:03:41.264 15:03:06 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:41.264 15:03:06 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:41.264 15:03:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:41.264 15:03:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:03:41.264 15:03:06 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:41.264 15:03:06 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:41.264 15:03:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:41.264 15:03:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:03:41.264 15:03:06 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:41.264 15:03:06 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:41.264 15:03:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:41.264 15:03:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:03:41.264 15:03:06 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:41.264 15:03:06 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:41.264 15:03:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:41.264 15:03:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:d8:00.0 == *:*:*.* ]] 00:03:41.264 15:03:06 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:41.264 15:03:06 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:03:41.264 15:03:06 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:41.264 15:03:06 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:41.264 15:03:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:41.264 15:03:06 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:41.264 15:03:06 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:41.264 15:03:06 setup.sh.acl -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:41.264 15:03:06 setup.sh.acl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:41.264 15:03:06 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:41.264 ************************************ 00:03:41.264 START TEST denied 00:03:41.264 ************************************ 00:03:41.264 15:03:06 setup.sh.acl.denied -- 
common/autotest_common.sh@1129 -- # denied 00:03:41.264 15:03:06 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:d8:00.0' 00:03:41.264 15:03:06 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:41.264 15:03:06 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:d8:00.0' 00:03:41.264 15:03:06 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:41.264 15:03:06 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:03:45.456 0000:d8:00.0 (8086 0a54): Skipping denied controller at 0000:d8:00.0 00:03:45.456 15:03:10 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:d8:00.0 00:03:45.456 15:03:10 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:45.456 15:03:10 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:45.456 15:03:10 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:d8:00.0 ]] 00:03:45.456 15:03:10 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:d8:00.0/driver 00:03:45.456 15:03:10 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:45.456 15:03:10 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:45.456 15:03:10 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:45.456 15:03:10 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:45.456 15:03:10 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:49.639 00:03:49.639 real 0m8.226s 00:03:49.639 user 0m2.673s 00:03:49.639 sys 0m4.921s 00:03:49.639 15:03:14 setup.sh.acl.denied -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:49.639 15:03:14 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:49.639 ************************************ 00:03:49.639 END TEST denied 00:03:49.639 ************************************ 00:03:49.639 15:03:14 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:49.639 15:03:14 setup.sh.acl -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:49.639 15:03:14 setup.sh.acl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:49.639 15:03:14 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:49.639 ************************************ 00:03:49.639 START TEST allowed 00:03:49.639 ************************************ 00:03:49.639 15:03:14 setup.sh.acl.allowed -- common/autotest_common.sh@1129 -- # allowed 00:03:49.639 15:03:14 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:d8:00.0 00:03:49.639 15:03:14 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:49.639 15:03:14 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:d8:00.0 .*: nvme -> .*' 00:03:49.639 15:03:14 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:49.639 15:03:14 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:03:54.902 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:03:54.902 15:03:19 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:54.902 15:03:19 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:54.902 15:03:19 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:54.902 15:03:19 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:54.902 15:03:19 setup.sh.acl.allowed 
-- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:58.184 00:03:58.184 real 0m8.174s 00:03:58.184 user 0m2.241s 00:03:58.184 sys 0m4.447s 00:03:58.184 15:03:23 setup.sh.acl.allowed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:58.184 15:03:23 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:58.184 ************************************ 00:03:58.184 END TEST allowed 00:03:58.184 ************************************ 00:03:58.184 00:03:58.184 real 0m24.024s 00:03:58.184 user 0m7.562s 00:03:58.184 sys 0m14.640s 00:03:58.184 15:03:23 setup.sh.acl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:58.184 15:03:23 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:58.184 ************************************ 00:03:58.184 END TEST acl 00:03:58.184 ************************************ 00:03:58.184 15:03:23 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:03:58.184 15:03:23 setup.sh -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:58.184 15:03:23 setup.sh -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:58.184 15:03:23 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:58.184 ************************************ 00:03:58.184 START TEST hugepages 00:03:58.184 ************************************ 00:03:58.184 15:03:23 setup.sh.hugepages -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:03:58.184 * Looking for test storage... 00:03:58.184 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:58.184 15:03:23 setup.sh.hugepages -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:58.184 15:03:23 setup.sh.hugepages -- common/autotest_common.sh@1693 -- # lcov --version 00:03:58.184 15:03:23 setup.sh.hugepages -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:58.184 15:03:23 setup.sh.hugepages -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:58.184 15:03:23 setup.sh.hugepages -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:58.184 15:03:23 setup.sh.hugepages -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:58.184 15:03:23 setup.sh.hugepages -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:58.184 15:03:23 setup.sh.hugepages -- scripts/common.sh@336 -- # IFS=.-: 00:03:58.184 15:03:23 setup.sh.hugepages -- scripts/common.sh@336 -- # read -ra ver1 00:03:58.184 15:03:23 setup.sh.hugepages -- scripts/common.sh@337 -- # IFS=.-: 00:03:58.184 15:03:23 setup.sh.hugepages -- scripts/common.sh@337 -- # read -ra ver2 00:03:58.184 15:03:23 setup.sh.hugepages -- scripts/common.sh@338 -- # local 'op=<' 00:03:58.184 15:03:23 setup.sh.hugepages -- scripts/common.sh@340 -- # ver1_l=2 00:03:58.184 15:03:23 setup.sh.hugepages -- scripts/common.sh@341 -- # ver2_l=1 00:03:58.184 15:03:23 setup.sh.hugepages -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:58.184 15:03:23 setup.sh.hugepages -- scripts/common.sh@344 -- # case "$op" in 00:03:58.184 15:03:23 setup.sh.hugepages -- scripts/common.sh@345 -- # : 1 00:03:58.184 15:03:23 setup.sh.hugepages -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:58.184 15:03:23 setup.sh.hugepages -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:58.184 15:03:23 setup.sh.hugepages -- scripts/common.sh@365 -- # decimal 1 00:03:58.184 15:03:23 setup.sh.hugepages -- scripts/common.sh@353 -- # local d=1 00:03:58.184 15:03:23 setup.sh.hugepages -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:58.184 15:03:23 setup.sh.hugepages -- scripts/common.sh@355 -- # echo 1 00:03:58.184 15:03:23 setup.sh.hugepages -- scripts/common.sh@365 -- # ver1[v]=1 00:03:58.184 15:03:23 setup.sh.hugepages -- scripts/common.sh@366 -- # decimal 2 00:03:58.184 15:03:23 setup.sh.hugepages -- scripts/common.sh@353 -- # local d=2 00:03:58.184 15:03:23 setup.sh.hugepages -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:58.184 15:03:23 setup.sh.hugepages -- scripts/common.sh@355 -- # echo 2 00:03:58.184 15:03:23 setup.sh.hugepages -- scripts/common.sh@366 -- # ver2[v]=2 00:03:58.184 15:03:23 setup.sh.hugepages -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:58.184 15:03:23 setup.sh.hugepages -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:58.184 15:03:23 setup.sh.hugepages -- scripts/common.sh@368 -- # return 0 00:03:58.184 15:03:23 setup.sh.hugepages -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:58.184 15:03:23 setup.sh.hugepages -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:58.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:58.184 --rc genhtml_branch_coverage=1 00:03:58.184 --rc genhtml_function_coverage=1 00:03:58.184 --rc genhtml_legend=1 00:03:58.184 --rc geninfo_all_blocks=1 00:03:58.184 --rc geninfo_unexecuted_blocks=1 00:03:58.184 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:58.184 ' 00:03:58.184 15:03:23 setup.sh.hugepages -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:58.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:58.184 --rc genhtml_branch_coverage=1 00:03:58.184 --rc genhtml_function_coverage=1 00:03:58.184 --rc genhtml_legend=1 00:03:58.184 --rc geninfo_all_blocks=1 00:03:58.184 --rc geninfo_unexecuted_blocks=1 00:03:58.184 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:58.184 ' 00:03:58.184 15:03:23 setup.sh.hugepages -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:58.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:58.184 --rc genhtml_branch_coverage=1 00:03:58.184 --rc genhtml_function_coverage=1 00:03:58.184 --rc genhtml_legend=1 00:03:58.184 --rc geninfo_all_blocks=1 00:03:58.184 --rc geninfo_unexecuted_blocks=1 00:03:58.184 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:58.184 ' 00:03:58.184 15:03:23 setup.sh.hugepages -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:58.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:58.184 --rc genhtml_branch_coverage=1 00:03:58.184 --rc genhtml_function_coverage=1 00:03:58.184 --rc genhtml_legend=1 00:03:58.184 --rc geninfo_all_blocks=1 00:03:58.184 --rc geninfo_unexecuted_blocks=1 00:03:58.184 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:58.184 ' 00:03:58.184 15:03:23 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:58.184 15:03:23 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:58.184 15:03:23 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:58.184 15:03:23 
setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:58.184 15:03:23 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:58.184 15:03:23 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:58.184 15:03:23 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:58.184 15:03:23 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:58.184 15:03:23 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:58.184 15:03:23 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:58.184 15:03:23 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.184 15:03:23 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:58.184 15:03:23 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:58.184 15:03:23 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.184 15:03:23 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.184 15:03:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.184 15:03:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 41063920 kB' 'MemAvailable: 42685200 kB' 'Buffers: 6784 kB' 'Cached: 9650248 kB' 'SwapCached: 76 kB' 'Active: 7080744 kB' 'Inactive: 3191692 kB' 'Active(anon): 6173416 kB' 'Inactive(anon): 2351048 kB' 'Active(file): 907328 kB' 'Inactive(file): 840644 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 7698940 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 618880 kB' 'Mapped: 157324 kB' 'Shmem: 7909060 kB' 'KReclaimable: 570108 kB' 'Slab: 1571976 kB' 'SReclaimable: 570108 kB' 'SUnreclaim: 1001868 kB' 'KernelStack: 21904 kB' 'PageTables: 8884 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36433348 kB' 'Committed_AS: 10460968 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217940 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 3210612 kB' 'DirectMap2M: 37369856 kB' 'DirectMap1G: 29360128 kB' 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@31 -- # 
IFS=': ' 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.185 
15:03:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@31 -- 
# read -r var val _ 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.185 15:03:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
00:03:58.186 15:03:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
00:03:58.186 15:03:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/hugepages.sh@16 -- # 
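The trace above is setup/common.sh's get_meminfo walking /proc/meminfo key by key until it reaches the requested field (Hugepagesize) and echoing its value, 2048 kB. Below is a minimal sketch of that lookup, assuming a plain system-wide /proc/meminfo and omitting the per-node (/sys/devices/system/node/nodeN/meminfo) handling the real helper also covers; the function name is illustrative, not the SPDK implementation.

# Hedged reconstruction of the lookup traced above, not the actual
# setup/common.sh helper. Splits each meminfo line on ': ' and returns
# the value of the requested key, e.g. Hugepagesize -> 2048.
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < /proc/meminfo
    return 1
}

default_hugepages=$(get_meminfo_sketch Hugepagesize)   # 2048 on this runner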
default_hugepages=2048 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGEMEM 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGENODE 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v NRHUGE 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/hugepages.sh@197 -- # get_nodes 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/hugepages.sh@26 -- # local node 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/hugepages.sh@31 -- # no_nodes=2 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/hugepages.sh@198 -- # clear_hp 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/hugepages.sh@36 -- # local node hp 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}" 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}" 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/hugepages.sh@44 -- # export CLEAR_HUGE=yes 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/hugepages.sh@44 -- # CLEAR_HUGE=yes 00:03:58.186 15:03:23 setup.sh.hugepages -- setup/hugepages.sh@200 -- # run_test single_node_setup single_node_setup 00:03:58.186 15:03:23 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:58.186 15:03:23 setup.sh.hugepages -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:58.186 15:03:23 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:58.186 ************************************ 00:03:58.186 START TEST single_node_setup 00:03:58.186 ************************************ 00:03:58.186 15:03:23 setup.sh.hugepages.single_node_setup -- common/autotest_common.sh@1129 -- # single_node_setup 00:03:58.186 15:03:23 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@135 -- # get_test_nr_hugepages 2097152 0 00:03:58.186 15:03:23 
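The clear_hp loop traced above zeroes every hugepage pool on both NUMA nodes before the test allocates its own pages. A hedged sketch of that step (illustrative helper name, and writing the sysfs files requires root):

# Reset every per-node hugepage pool to 0, mirroring the 'echo 0' calls
# in the trace; CLEAR_HUGE=yes is exported for the later scripts/setup.sh
# run, as the trace shows.
clear_hp_sketch() {
    local node hp
    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            echo 0 > "$hp/nr_hugepages"
        done
    done
    export CLEAR_HUGE=yes
}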
setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@48 -- # local size=2097152 00:03:58.186 15:03:23 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@49 -- # (( 2 > 1 )) 00:03:58.186 15:03:23 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@50 -- # shift 00:03:58.186 15:03:23 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@51 -- # node_ids=('0') 00:03:58.186 15:03:23 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@51 -- # local node_ids 00:03:58.186 15:03:23 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:03:58.186 15:03:23 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@56 -- # nr_hugepages=1024 00:03:58.186 15:03:23 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 0 00:03:58.186 15:03:23 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@61 -- # user_nodes=('0') 00:03:58.186 15:03:23 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@61 -- # local user_nodes 00:03:58.186 15:03:23 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024 00:03:58.186 15:03:23 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:03:58.187 15:03:23 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@66 -- # nodes_test=() 00:03:58.187 15:03:23 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@66 -- # local -g nodes_test 00:03:58.187 15:03:23 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@68 -- # (( 1 > 0 )) 00:03:58.187 15:03:23 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@69 -- # for _no_nodes in "${user_nodes[@]}" 00:03:58.187 15:03:23 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@70 -- # nodes_test[_no_nodes]=1024 00:03:58.187 15:03:23 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@72 -- # return 0 00:03:58.187 15:03:23 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@136 -- # NRHUGE=1024 00:03:58.187 15:03:23 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@136 -- # HUGENODE=0 00:03:58.187 15:03:23 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@136 -- # setup output 00:03:58.187 15:03:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:58.187 15:03:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:01.463 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:01.463 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:01.463 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:01.463 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:01.463 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:01.463 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:01.463 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:01.463 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:01.463 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:01.463 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:01.463 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:01.721 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:01.721 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:01.721 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:01.721 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:01.721 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:03.247 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:03.247 15:03:28 setup.sh.hugepages.single_node_setup 
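In the get_test_nr_hugepages trace above, the requested 2097152 kB is converted into 2097152 / 2048 = 1024 pages and pinned to node 0 only, after which scripts/setup.sh runs with NRHUGE=1024 and HUGENODE=0 and rebinds the ioatdma and nvme devices to vfio-pci. A hedged sketch of the equivalent single-node request (hypothetical helper, direct sysfs write instead of the SPDK script; needs root):

# Request 2 GiB of default-sized hugepages on NUMA node 0 only:
# 2097152 kB / 2048 kB per page = 1024 pages.
request_single_node_hugepages() {
    local size_kb=2097152 node=0 hp_kb nr
    hp_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this box
    nr=$(( size_kb / hp_kb ))                                  # -> 1024
    echo "$nr" > "/sys/devices/system/node/node${node}/hugepages/hugepages-${hp_kb}kB/nr_hugepages"
}

The CI job itself reaches the same state through the traced invocation, i.e. NRHUGE=1024 HUGENODE=0 CLEAR_HUGE=yes .../spdk/scripts/setup.sh.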
-- setup/hugepages.sh@137 -- # verify_nr_hugepages 00:04:03.247 15:03:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@88 -- # local node 00:04:03.247 15:03:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@89 -- # local sorted_t 00:04:03.247 15:03:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@90 -- # local sorted_s 00:04:03.247 15:03:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@91 -- # local surp 00:04:03.247 15:03:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@92 -- # local resv 00:04:03.247 15:03:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@93 -- # local anon 00:04:03.247 15:03:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:03.247 15:03:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:04:03.247 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:03.247 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node= 00:04:03.247 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:04:03.247 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:03.247 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.247 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.247 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.247 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.247 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.247 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.247 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.247 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 43214624 kB' 'MemAvailable: 44835896 kB' 'Buffers: 6784 kB' 'Cached: 9650384 kB' 'SwapCached: 76 kB' 'Active: 7083248 kB' 'Inactive: 3191692 kB' 'Active(anon): 6175920 kB' 'Inactive(anon): 2351048 kB' 'Active(file): 907328 kB' 'Inactive(file): 840644 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 7698940 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 621100 kB' 'Mapped: 157456 kB' 'Shmem: 7909196 kB' 'KReclaimable: 570100 kB' 'Slab: 1570388 kB' 'SReclaimable: 570100 kB' 'SUnreclaim: 1000288 kB' 'KernelStack: 21936 kB' 'PageTables: 9252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481924 kB' 'Committed_AS: 10461624 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218084 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3210612 kB' 'DirectMap2M: 37369856 kB' 'DirectMap1G: 29360128 kB' 00:04:03.247 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.247 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.248 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.248 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.248 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.248 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.248 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.248 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.248 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.248 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.248 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.248 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.248 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.248 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.248 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.248 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.248 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.248 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.248 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.248 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.248 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.248 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.248 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.248 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.248 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.248 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.248 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.248 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.248 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.248 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.248 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.248 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.248 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.248 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.248 15:03:28 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.248 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.248 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.248 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.248 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.248 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.248 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.248 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.248 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.248 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.248 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.248 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.248 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.248 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.248 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.248 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.248 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.248 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.248 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.248 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.248 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.248 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.248 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.248 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.248 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.248 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.248 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.248 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.248 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.248 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.248 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.248 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.248 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.248 15:03:28 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:03.248 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.248 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.248 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.248 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.248 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.248 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.248 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.248 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.248 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.248 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.248 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.248 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.248 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.248 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.248 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.248 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.248 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.248 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.248 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.248 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.248 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.249 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.249 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.249 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.249 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.249 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.249 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.249 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.249 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.249 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.249 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.249 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.249 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SReclaimable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.249 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.249 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.249 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.249 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.249 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.249 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.249 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.249 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.249 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.249 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.249 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.249 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.249 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.249 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.249 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.249 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.249 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.249 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.249 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.249 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.249 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.249 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.249 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.249 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.249 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.249 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.249 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.249 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.249 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.249 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.249 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.249 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.249 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 
00:04:03.249 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.249 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.249 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.249 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.249 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.249 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.249 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.249 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.249 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.249 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.249 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.249 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.249 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.249 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.249 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.249 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.249 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.249 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.249 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.249 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.249 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.249 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.249 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.249 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.249 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.249 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.249 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.249 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0 00:04:03.249 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0 00:04:03.249 15:03:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@96 -- # anon=0 00:04:03.249 15:03:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:04:03.249 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:03.249 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node= 00:04:03.249 15:03:28 
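verify_nr_hugepages then re-reads the same counters the loops above are scanning: anon=0 comes from AnonHugePages, and the meminfo dumps in this run report HugePages_Total 1024, HugePages_Free 1024, HugePages_Rsvd 0 and HugePages_Surp 0. A quick way to dump just those fields outside the harness (illustrative only, not the SPDK verifier):

# Print the hugepage counters the verification step walks through.
grep -E '^(HugePages_Total|HugePages_Free|HugePages_Rsvd|HugePages_Surp|AnonHugePages):' /proc/meminfo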
setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:04:03.249 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:03.249 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.249 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.249 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.249 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.249 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.249 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.249 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.249 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 43218244 kB' 'MemAvailable: 44839516 kB' 'Buffers: 6784 kB' 'Cached: 9650388 kB' 'SwapCached: 76 kB' 'Active: 7083804 kB' 'Inactive: 3191692 kB' 'Active(anon): 6176476 kB' 'Inactive(anon): 2351048 kB' 'Active(file): 907328 kB' 'Inactive(file): 840644 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 7698940 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 621492 kB' 'Mapped: 157372 kB' 'Shmem: 7909200 kB' 'KReclaimable: 570100 kB' 'Slab: 1570300 kB' 'SReclaimable: 570100 kB' 'SUnreclaim: 1000200 kB' 'KernelStack: 22112 kB' 'PageTables: 9520 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481924 kB' 'Committed_AS: 10461448 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218084 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3210612 kB' 'DirectMap2M: 37369856 kB' 'DirectMap1G: 29360128 kB' 00:04:03.250 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.250 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.250 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.250 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.250 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.250 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.250 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.250 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.250 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.250 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.250 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.250 15:03:28 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:03.250 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.250 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.250 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.250 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.250 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.250 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.250 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.250 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.250 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.250 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.250 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.250 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.250 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.250 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.250 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.250 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.250 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.250 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.250 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.250 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.250 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.250 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.250 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.250 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.250 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.250 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.250 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.250 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.250 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.250 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.250 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.250 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.250 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # 
[[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.250 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.250 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.250 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.250 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.250 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.250 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.250 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.250 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.250 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.250 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.250 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.250 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.250 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.250 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.250 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.250 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.250 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.250 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.250 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.250 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.250 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.250 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.250 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.250 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.250 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.250 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.250 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.250 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.250 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.250 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.250 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.250 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.250 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # 
continue 00:04:03.250 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.250 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.250 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.250 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.250 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.250 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.250 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.250 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.250 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.251 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.251 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.251 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.251 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.251 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.251 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.251 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.251 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.251 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.251 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.251 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.251 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.251 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.251 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.251 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.251 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.251 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.251 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.251 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.251 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.251 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.251 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.251 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.251 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.251 15:03:28 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.251 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.251 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.251 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.251 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.251 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.251 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.251 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.251 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.251 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.251 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.251 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.251 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.251 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.251 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.251 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.251 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.251 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.251 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.251 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.251 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.251 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.251 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.251 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.251 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.251 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.251 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.251 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.251 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.251 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.251 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.251 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.251 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.514 15:03:28 
setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.514 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.514 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.515 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.515 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.515 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.515 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.515 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.515 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.515 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.515 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.515 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.515 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.515 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.515 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.515 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.515 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.515 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.515 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.515 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.515 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.515 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.515 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.515 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.515 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.515 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.515 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.515 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.515 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.515 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.515 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.515 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.515 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.515 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.515 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.515 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.515 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.515 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.515 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.515 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.515 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.515 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.515 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.515 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.515 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.515 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.515 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.515 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.515 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.515 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.515 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.515 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.515 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.515 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.515 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.515 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.515 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.515 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.515 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.515 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.515 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.515 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0 00:04:03.515 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0 00:04:03.515 15:03:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@98 -- # surp=0 00:04:03.515 15:03:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:04:03.515 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local 
get=HugePages_Rsvd 00:04:03.515 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node= 00:04:03.515 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:04:03.515 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:03.515 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.515 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.515 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.515 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.515 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.515 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 43235804 kB' 'MemAvailable: 44857076 kB' 'Buffers: 6784 kB' 'Cached: 9650388 kB' 'SwapCached: 76 kB' 'Active: 7083136 kB' 'Inactive: 3191692 kB' 'Active(anon): 6175808 kB' 'Inactive(anon): 2351048 kB' 'Active(file): 907328 kB' 'Inactive(file): 840644 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 7698940 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 620820 kB' 'Mapped: 157372 kB' 'Shmem: 7909200 kB' 'KReclaimable: 570100 kB' 'Slab: 1570284 kB' 'SReclaimable: 570100 kB' 'SUnreclaim: 1000184 kB' 'KernelStack: 21968 kB' 'PageTables: 8796 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481924 kB' 'Committed_AS: 10461432 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217988 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3210612 kB' 'DirectMap2M: 37369856 kB' 'DirectMap1G: 29360128 kB' 00:04:03.515 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.515 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.515 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.515 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.515 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.515 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.515 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.515 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.515 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.515 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.515 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.515 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.515 
15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.515 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.515 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.515 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.515 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.515 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.515 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.515 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.515 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.515 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.516 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.516 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.516 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.516 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.516 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.516 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.516 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.516 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.516 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.516 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.516 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.516 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.516 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.516 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.516 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.516 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.516 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.516 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.516 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.516 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.516 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.516 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.516 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.516 15:03:28 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.516 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.516 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.516 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.516 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.516 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.516 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.516 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.516 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.516 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.516 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.516 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.516 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.516 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.516 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.516 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.516 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.516 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.516 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.516 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.516 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.516 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.516 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.516 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.516 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.516 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.516 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.516 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.516 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.516 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.516 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.516 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.516 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.516 15:03:28 
setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.516 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.516 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.516 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.516 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.516 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.516 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.516 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.516 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.516 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.516 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.516 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.516 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.516 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.516 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.516 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.516 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.516 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.516 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.516 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.516 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.516 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.516 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.516 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.516 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.516 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.516 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.516 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.516 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.516 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.516 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.516 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.516 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.516 15:03:28 
setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.516 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.516 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.516 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.516 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.516 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.516 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.516 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.516 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.516 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.516 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.516 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.516 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.516 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.516 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.516 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.516 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.516 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.516 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.516 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.516 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.516 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.517 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.517 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.517 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.517 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.517 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.517 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.517 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.517 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.517 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.517 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.517 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.517 15:03:28 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.517 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.517 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.517 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.517 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.517 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.517 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.517 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.517 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.517 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.517 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.517 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.517 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.517 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.517 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.517 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.517 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.517 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.517 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.517 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.517 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.517 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.517 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.517 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.517 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.517 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.517 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.517 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.517 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.517 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.517 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.517 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.517 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.517 15:03:28 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.517 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.517 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.517 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.517 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.517 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.517 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.517 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.517 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.517 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.517 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.517 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.517 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.517 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.517 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.517 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.517 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.517 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.517 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.517 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.517 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.517 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.517 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.517 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.517 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.517 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.517 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0 00:04:03.517 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0 00:04:03.517 15:03:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@99 -- # resv=0 00:04:03.517 15:03:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024 00:04:03.517 nr_hugepages=1024 00:04:03.517 15:03:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:04:03.517 resv_hugepages=0 00:04:03.517 15:03:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:04:03.517 surplus_hugepages=0 00:04:03.517 15:03:28 
setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:04:03.517 anon_hugepages=0 00:04:03.517 15:03:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:03.517 15:03:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages )) 00:04:03.517 15:03:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:04:03.517 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:03.517 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node= 00:04:03.517 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:04:03.517 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:03.517 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.517 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.517 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.517 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.517 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.517 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.517 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.517 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 43237172 kB' 'MemAvailable: 44858444 kB' 'Buffers: 6784 kB' 'Cached: 9650408 kB' 'SwapCached: 76 kB' 'Active: 7083196 kB' 'Inactive: 3191692 kB' 'Active(anon): 6175868 kB' 'Inactive(anon): 2351048 kB' 'Active(file): 907328 kB' 'Inactive(file): 840644 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 7698940 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 620832 kB' 'Mapped: 157372 kB' 'Shmem: 7909220 kB' 'KReclaimable: 570100 kB' 'Slab: 1570284 kB' 'SReclaimable: 570100 kB' 'SUnreclaim: 1000184 kB' 'KernelStack: 21936 kB' 'PageTables: 9024 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481924 kB' 'Committed_AS: 10461456 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218052 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3210612 kB' 'DirectMap2M: 37369856 kB' 'DirectMap1G: 29360128 kB' 00:04:03.517 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.517 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.517 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.518 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.518 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ 
MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.518 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.518 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.518 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.518 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.518 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.518 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.518 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.518 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.518 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.518 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.518 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.518 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.518 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.518 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.518 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.518 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.518 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.518 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.518 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.518 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.518 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.518 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.518 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.518 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.518 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.518 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.518 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.518 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.518 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.518 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.518 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.518 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.518 15:03:28 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@32 -- # continue 00:04:03.518 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.518 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.518 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.518 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.518 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.518 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.518 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.518 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.518 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.518 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.518 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.518 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.518 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.518 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.518 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.518 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.518 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.518 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.518 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.518 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.518 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.518 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.518 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.518 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.518 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.518 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.518 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.518 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.518 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.518 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.518 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.518 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.518 15:03:28 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:03.518 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.518 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.518 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.518 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.518 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.519 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.519 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.519 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.519 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.519 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.519 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.519 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.519 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.519 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.519 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.519 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.519 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.519 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.519 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.519 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.519 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.519 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.519 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.519 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.519 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.519 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.519 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.519 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.519 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.519 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.519 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.519 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.519 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read 
-r var val _ 00:04:03.519 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.519 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.519 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.519 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.519 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.519 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.519 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.519 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.519 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.519 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.519 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.519 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.519 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.519 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.519 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.519 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.519 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.519 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.519 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.519 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.519 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.519 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.519 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.519 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.519 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.519 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.519 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.519 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.519 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.519 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.519 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.519 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.519 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ 
Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.519 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.519 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.519 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.519 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.519 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.519 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.519 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.519 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.519 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.519 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.519 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.519 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.519 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.520 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.520 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.520 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.520 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.520 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.520 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.520 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.520 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.520 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.520 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.520 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.520 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.520 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.520 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.520 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.520 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.520 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.520 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.520 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.520 15:03:28 
setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.520 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.520 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.520 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.520 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.520 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.520 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.520 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.520 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.520 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.520 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.520 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.520 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.520 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.520 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.520 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.520 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.520 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.520 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.520 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.520 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.520 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.520 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.520 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.520 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 1024 00:04:03.520 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0 00:04:03.520 15:03:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:03.520 15:03:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@111 -- # get_nodes 00:04:03.520 15:03:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@26 -- # local node 00:04:03.520 15:03:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:03.520 15:03:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:04:03.520 15:03:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:03.520 15:03:28 setup.sh.hugepages.single_node_setup -- 
setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=0 00:04:03.520 15:03:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@31 -- # no_nodes=2 00:04:03.520 15:03:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:04:03.520 15:03:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:04:03.521 15:03:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:04:03.521 15:03:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:04:03.521 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:03.521 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node=0 00:04:03.521 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:04:03.521 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:03.521 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.521 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:03.521 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:03.521 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.521 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.521 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.521 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.521 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32634436 kB' 'MemFree: 23400560 kB' 'MemUsed: 9233876 kB' 'SwapCached: 44 kB' 'Active: 4221796 kB' 'Inactive: 535260 kB' 'Active(anon): 3444236 kB' 'Inactive(anon): 56 kB' 'Active(file): 777560 kB' 'Inactive(file): 535204 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4433624 kB' 'Mapped: 100572 kB' 'AnonPages: 326532 kB' 'Shmem: 3120816 kB' 'KernelStack: 11480 kB' 'PageTables: 5364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 391264 kB' 'Slab: 876052 kB' 'SReclaimable: 391264 kB' 'SUnreclaim: 484788 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:03.521 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.521 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.521 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.521 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.521 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.521 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.521 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.521 15:03:28 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:03.521 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.521 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.521 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.521 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.521 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.521 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.521 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.521 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.521 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.521 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.521 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.521 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.521 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.521 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.521 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.521 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.521 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.521 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.521 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.521 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.521 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.521 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.521 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.521 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.521 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.521 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.521 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.521 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.521 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.521 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.521 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.521 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.521 15:03:28 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.521 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.521 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.521 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.521 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.521 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.521 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.521 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.521 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.521 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.521 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.521 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.521 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.521 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.521 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.521 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.521 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.522 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.522 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.522 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.522 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.522 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.522 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.522 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.522 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.522 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.522 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.522 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.522 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.522 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.522 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.522 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.522 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.522 15:03:28 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@32 -- # continue 00:04:03.522 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.522 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.522 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.522 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.522 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.522 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.522 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.522 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.522 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.522 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.522 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.522 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.522 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.522 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.522 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.522 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.522 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.522 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.522 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.522 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.522 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.522 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.522 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.522 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.522 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.522 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.522 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.522 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.522 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.522 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.522 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.522 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.522 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # 
IFS=': ' 00:04:03.522 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.522 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.522 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.522 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.522 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.522 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.522 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.522 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.522 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.522 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.522 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.522 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.522 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.522 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.522 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.522 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.522 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.522 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.522 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.522 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.522 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.522 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.522 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.522 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.522 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.522 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.522 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.522 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.522 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.523 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.523 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.523 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.523 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read 
-r var val _ 00:04:03.523 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.523 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:03.523 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.523 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.523 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.523 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0 00:04:03.523 15:03:28 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0 00:04:03.523 15:03:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:04:03.523 15:03:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:04:03.523 15:03:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:04:03.523 15:03:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:04:03.523 15:03:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@127 -- # echo 'node0=1024 expecting 1024' 00:04:03.523 node0=1024 expecting 1024 00:04:03.523 15:03:28 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@129 -- # [[ 1024 == \1\0\2\4 ]] 00:04:03.523 00:04:03.523 real 0m5.206s 00:04:03.523 user 0m1.342s 00:04:03.523 sys 0m2.372s 00:04:03.523 15:03:28 setup.sh.hugepages.single_node_setup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:03.523 15:03:28 setup.sh.hugepages.single_node_setup -- common/autotest_common.sh@10 -- # set +x 00:04:03.523 ************************************ 00:04:03.523 END TEST single_node_setup 00:04:03.523 ************************************ 00:04:03.523 15:03:28 setup.sh.hugepages -- setup/hugepages.sh@201 -- # run_test even_2G_alloc even_2G_alloc 00:04:03.523 15:03:28 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:03.523 15:03:28 setup.sh.hugepages -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:03.523 15:03:28 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:03.523 ************************************ 00:04:03.523 START TEST even_2G_alloc 00:04:03.523 ************************************ 00:04:03.523 15:03:28 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1129 -- # even_2G_alloc 00:04:03.523 15:03:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@142 -- # get_test_nr_hugepages 2097152 00:04:03.523 15:03:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@48 -- # local size=2097152 00:04:03.523 15:03:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 )) 00:04:03.523 15:03:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:04:03.523 15:03:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1024 00:04:03.523 15:03:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 00:04:03.523 15:03:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:04:03.523 15:03:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:04:03.523 15:03:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@63 -- # local 
_nr_hugepages=1024 00:04:03.523 15:03:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:04:03.523 15:03:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:04:03.523 15:03:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:04:03.523 15:03:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:04:03.523 15:03:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@73 -- # (( 0 > 0 )) 00:04:03.523 15:03:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:04:03.523 15:03:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=512 00:04:03.523 15:03:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # : 512 00:04:03.523 15:03:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 1 00:04:03.523 15:03:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:04:03.523 15:03:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=512 00:04:03.523 15:03:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # : 0 00:04:03.523 15:03:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:03.523 15:03:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:04:03.523 15:03:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@143 -- # NRHUGE=1024 00:04:03.523 15:03:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@143 -- # setup output 00:04:03.523 15:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:03.523 15:03:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:06.817 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:06.817 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:06.817 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:06.817 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:06.817 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:06.817 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:06.817 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:06.817 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:06.817 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:06.817 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:06.817 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:06.817 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:06.817 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:06.817 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:06.817 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:06.817 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:06.817 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:07.084 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@144 -- # verify_nr_hugepages 00:04:07.084 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@88 -- # local node 00:04:07.084 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local sorted_t 00:04:07.084 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_s 00:04:07.084 15:03:32 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local surp 00:04:07.084 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local resv 00:04:07.084 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local anon 00:04:07.084 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:07.084 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:04:07.084 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:07.084 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:07.084 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:07.084 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:07.084 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.084 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.084 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:07.084 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.084 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.084 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.084 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.084 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 43246192 kB' 'MemAvailable: 44867464 kB' 'Buffers: 6784 kB' 'Cached: 9650564 kB' 'SwapCached: 76 kB' 'Active: 7081220 kB' 'Inactive: 3191692 kB' 'Active(anon): 6173892 kB' 'Inactive(anon): 2351048 kB' 'Active(file): 907328 kB' 'Inactive(file): 840644 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 7698940 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 618820 kB' 'Mapped: 156276 kB' 'Shmem: 7909376 kB' 'KReclaimable: 570100 kB' 'Slab: 1570948 kB' 'SReclaimable: 570100 kB' 'SUnreclaim: 1000848 kB' 'KernelStack: 21872 kB' 'PageTables: 8664 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481924 kB' 'Committed_AS: 10452572 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218020 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3210612 kB' 'DirectMap2M: 37369856 kB' 'DirectMap1G: 29360128 kB' 00:04:07.084 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.084 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.084 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.084 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.084 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.084 
15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.084 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.084 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.084 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.084 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.084 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.084 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.084 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.084 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.084 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.084 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.084 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.084 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.084 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.084 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.084 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.084 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.084 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.084 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.084 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.084 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.084 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.084 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.084 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.084 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.084 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.084 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.084 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.084 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.084 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.084 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.084 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.084 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.084 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.084 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
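
The long runs of "[[ <field> == <requested-field> ]]" / "continue" entries around this point are the xtrace of the setup/common.sh get_meminfo helper: it snapshots /proc/meminfo (or /sys/devices/system/node/node<N>/meminfo when a node is given), strips any "Node <N>" prefix, then walks the key/value pairs with IFS=': ' and echoes the value of the first key that matches the requested field. Below is a minimal self-contained sketch of that pattern; the name get_meminfo_sketch and its argument handling are illustrative only, inferred from this trace rather than copied from the SPDK script.

# Sketch (illustrative, not the verbatim setup/common.sh implementation):
# print one field from /proc/meminfo or, when a NUMA node is given, from
# that node's meminfo file.
get_meminfo_sketch() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo line var val _
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while IFS= read -r line; do
        line=${line#Node * }                  # per-node files prefix each line with "Node <N> "
        IFS=': ' read -r var val _ <<<"$line"
        if [[ $var == "$get" ]]; then         # e.g. HugePages_Total, HugePages_Surp, AnonHugePages
            echo "$val"                       # number only, without the "kB" unit
            return 0
        fi
    done <"$mem_f"
    return 1
}

# Usage matching the values echoed in this log:
#   get_meminfo_sketch HugePages_Total     # -> 1024
#   get_meminfo_sketch HugePages_Surp 0    # -> 0 (surplus hugepages on node0)
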
00:04:07.084 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.084 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.084 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.084 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.084 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.084 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.084 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.084 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.084 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.084 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.084 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.084 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.084 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.084 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.084 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.084 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.084 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.084 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.084 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.084 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.084 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.084 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.084 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.084 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.084 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.084 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.084 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.084 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.084 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.084 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.084 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.084 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.084 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.084 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.084 15:03:32 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:07.084 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.084 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.084 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.084 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.084 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.084 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.084 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.084 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.085 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.085 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.085 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.085 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.085 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.085 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.085 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.085 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.085 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.085 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.085 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.085 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.085 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.085 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.085 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.085 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.085 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.085 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.085 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.085 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.085 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.085 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.085 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.085 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.085 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.085 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.085 
15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.085 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.085 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.085 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.085 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.085 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.085 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.085 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.085 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.085 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.085 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.085 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.085 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.085 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.085 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.085 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.085 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.085 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.085 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.085 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.085 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.085 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.085 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.085 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.085 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.085 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.085 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.085 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.085 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.085 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.085 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.085 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.085 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.085 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.085 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:07.085 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.085 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.085 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.085 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.085 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.085 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.085 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.085 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.085 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.085 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.085 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.085 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.085 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.085 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.085 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.085 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.085 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.085 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:07.085 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:07.085 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # anon=0 00:04:07.085 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:04:07.085 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:07.085 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:07.085 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:07.085 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:07.085 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.085 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.085 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:07.085 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.085 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.085 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.085 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.085 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 43246172 kB' 'MemAvailable: 44867444 kB' 'Buffers: 6784 kB' 'Cached: 9650580 kB' 'SwapCached: 76 kB' 'Active: 7080488 kB' 
'Inactive: 3191692 kB' 'Active(anon): 6173160 kB' 'Inactive(anon): 2351048 kB' 'Active(file): 907328 kB' 'Inactive(file): 840644 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 7698940 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 618056 kB' 'Mapped: 156232 kB' 'Shmem: 7909392 kB' 'KReclaimable: 570100 kB' 'Slab: 1570944 kB' 'SReclaimable: 570100 kB' 'SUnreclaim: 1000844 kB' 'KernelStack: 21840 kB' 'PageTables: 8576 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481924 kB' 'Committed_AS: 10452592 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218004 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3210612 kB' 'DirectMap2M: 37369856 kB' 'DirectMap1G: 29360128 kB' 00:04:07.085 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.085 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.085 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.085 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.085 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.085 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.085 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.085 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.085 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.085 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.085 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.085 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.085 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.086 15:03:32 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.086 15:03:32 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.086 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@98 -- # surp=0 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 43246776 kB' 'MemAvailable: 44868048 kB' 'Buffers: 6784 kB' 'Cached: 9650584 kB' 'SwapCached: 76 kB' 'Active: 7080868 kB' 'Inactive: 3191692 kB' 'Active(anon): 6173540 kB' 'Inactive(anon): 2351048 kB' 'Active(file): 907328 kB' 'Inactive(file): 840644 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 7698940 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 618476 kB' 'Mapped: 156232 kB' 'Shmem: 7909396 kB' 'KReclaimable: 570100 kB' 'Slab: 1570944 kB' 'SReclaimable: 570100 kB' 'SUnreclaim: 1000844 kB' 'KernelStack: 21856 kB' 'PageTables: 8628 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481924 kB' 'Committed_AS: 10452612 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218020 kB' 'VmallocChunk: 0 
kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3210612 kB' 'DirectMap2M: 37369856 kB' 'DirectMap1G: 29360128 kB' 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.087 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.088 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.088 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.088 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.088 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.088 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.088 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.088 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.088 15:03:32 
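(The long run of compare/continue entries above and below is the get_meminfo helper scanning one snapshot of meminfo per call. A minimal sketch of that logic, reconstructed from the traced commands only and not a verified copy of setup/common.sh, assuming bash with extglob:)

    shopt -s extglob                      # needed for the +([0-9]) pattern below
    get_meminfo() {                       # e.g. get_meminfo HugePages_Rsvd [node]
        local get=$1 node=$2 var val _ line
        local mem_f=/proc/meminfo mem
        # a node argument switches to that node's own meminfo file when it exists
        [[ -e /sys/devices/system/node/node$node/meminfo ]] \
            && mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")  # node files prefix each line with "Node N "
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # the repeated compare/continue entries in the trace
            echo "$val"                        # value only; the "kB" unit lands in _
            return 0
        done
    }

(In this call node is empty, so the node-local test fails, the system-wide /proc/meminfo is scanned, and the lookup for HugePages_Rsvd eventually returns 0, visible as resv=0 a little further down.)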
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.088 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.088 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.088 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.088 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.088 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.088 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.088 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.088 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.088 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.088 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.088 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.088 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.088 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.088 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.088 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.088 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.088 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.088 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.088 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.088 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.088 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.088 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.088 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.088 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.088 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.088 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.088 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.088 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.088 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.088 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.088 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.088 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.088 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.088 15:03:32 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.088 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.088 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.088 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.088 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.088 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.088 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.088 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.088 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.088 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.088 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.088 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.088 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.088 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.088 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.088 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.088 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.088 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.088 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.088 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.088 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.088 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.088 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.088 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.088 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.088 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.088 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.088 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.088 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.088 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.088 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.088 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.088 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.088 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.088 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.088 15:03:32 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.088 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.088 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.088 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.088 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.088 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.088 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.088 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.088 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.088 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.088 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.088 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.088 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.088 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.088 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.088 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.088 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.088 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.088 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.088 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.088 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.088 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.088 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.088 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.088 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.088 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.088 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.088 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.088 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.089 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.089 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.089 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.089 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.089 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.089 15:03:32 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.089 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.089 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.089 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.089 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.089 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.089 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.089 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.089 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.089 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.089 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.089 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.089 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.089 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.089 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.089 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.089 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.089 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.089 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.089 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.089 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.089 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.089 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.089 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.089 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.089 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.089 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.089 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.089 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.089 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.089 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.089 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.089 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.089 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.089 15:03:32 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:07.089 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.089 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.089 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.089 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.089 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.089 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.089 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.089 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.089 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.089 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.089 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.089 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.089 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.089 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.089 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.089 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.089 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.089 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.089 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.089 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.089 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.089 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.089 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.089 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.089 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.089 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.089 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.089 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.089 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.089 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.089 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.089 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.089 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.089 15:03:32 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@33 -- # echo 0 00:04:07.089 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:07.089 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # resv=0 00:04:07.089 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024 00:04:07.089 nr_hugepages=1024 00:04:07.089 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:04:07.089 resv_hugepages=0 00:04:07.089 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:04:07.089 surplus_hugepages=0 00:04:07.089 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:04:07.089 anon_hugepages=0 00:04:07.089 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:07.089 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages )) 00:04:07.089 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:04:07.089 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:07.089 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:07.089 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:07.089 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:07.089 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.089 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.089 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:07.089 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.089 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.089 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.089 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.089 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 43246272 kB' 'MemAvailable: 44867544 kB' 'Buffers: 6784 kB' 'Cached: 9650624 kB' 'SwapCached: 76 kB' 'Active: 7080536 kB' 'Inactive: 3191692 kB' 'Active(anon): 6173208 kB' 'Inactive(anon): 2351048 kB' 'Active(file): 907328 kB' 'Inactive(file): 840644 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 7698940 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 618056 kB' 'Mapped: 156232 kB' 'Shmem: 7909436 kB' 'KReclaimable: 570100 kB' 'Slab: 1570944 kB' 'SReclaimable: 570100 kB' 'SUnreclaim: 1000844 kB' 'KernelStack: 21840 kB' 'PageTables: 8576 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481924 kB' 'Committed_AS: 10452632 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218020 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 
'DirectMap4k: 3210612 kB' 'DirectMap2M: 37369856 kB' 'DirectMap1G: 29360128 kB' 00:04:07.089 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.089 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.089 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.089 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.089 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.089 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.089 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.089 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.089 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.089 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.089 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.089 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.090 15:03:32 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# continue 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.090 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.091 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.091 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.091 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.091 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.091 15:03:32 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.091 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.091 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.091 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.091 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.091 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.091 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.091 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.091 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.091 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.091 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.091 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.091 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.091 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.091 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.091 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.091 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.091 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.091 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.091 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.091 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.091 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.091 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.091 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.091 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.091 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.091 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.091 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.091 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.091 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.091 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.091 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.091 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.091 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:04:07.091 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.091 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.091 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.091 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.091 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.091 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.091 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.091 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.091 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.091 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.091 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.091 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.091 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.091 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.091 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.091 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.091 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.091 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.091 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.091 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.091 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.091 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.091 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.091 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:07.091 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:07.091 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:07.091 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:04:07.091 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@26 -- # local node 00:04:07.091 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:07.091 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512 00:04:07.091 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:07.091 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512 00:04:07.091 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@31 -- # no_nodes=2 00:04:07.091 15:03:32 
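(With surp=0 and resv=0 from the two lookups above and HugePages_Total read back as 1024, the hugepages.sh@109 assertion (( 1024 == nr_hugepages + surp + resv )) reduces to 1024 == 1024 + 0 + 0 and passes. get_nodes then records 512 pages per NUMA node, the even 2G split of 1024 pages over the two nodes, before each node is re-checked against its node-local counters. A rough sketch of that bookkeeping as it appears in the trace, using the get_meminfo sketch above; the trace actually walks a separate nodes_test array populated elsewhere in hugepages.sh, shown here against nodes_sys for brevity:)

    shopt -s extglob
    declare -A nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=512        # even split: 1024 hugepages over 2 nodes
    done
    no_nodes=${#nodes_sys[@]}                # 2 on this machine
    (( no_nodes > 0 ))
    resv=0
    for node in "${!nodes_sys[@]}"; do
        (( nodes_sys[node] += resv ))        # add reserved pages to each node's target
        get_meminfo HugePages_Surp "$node"   # e.g. the node-0 lookup that starts below
    done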
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:04:07.091 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:04:07.091 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:04:07.091 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:04:07.091 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:07.091 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:07.091 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:07.091 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:07.091 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.091 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:07.091 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:07.091 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.091 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.091 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.091 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.091 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32634436 kB' 'MemFree: 24445192 kB' 'MemUsed: 8189244 kB' 'SwapCached: 44 kB' 'Active: 4220968 kB' 'Inactive: 535260 kB' 'Active(anon): 3443408 kB' 'Inactive(anon): 56 kB' 'Active(file): 777560 kB' 'Inactive(file): 535204 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4433640 kB' 'Mapped: 99764 kB' 'AnonPages: 325796 kB' 'Shmem: 3120832 kB' 'KernelStack: 11336 kB' 'PageTables: 5400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 391264 kB' 'Slab: 876484 kB' 'SReclaimable: 391264 kB' 'SUnreclaim: 485220 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:07.091 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.091 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.091 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.091 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.091 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.091 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.091 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.091 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.091 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.091 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.091 
[xtrace of the remaining node0 field-by-field scan elided: each meminfo field printed above is compared against HugePages_Surp and skipped with 'continue' until the HugePages_Surp entry is reached]
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.092 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.092 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.092 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.092 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.092 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.092 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.092 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.092 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.092 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.092 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.092 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.092 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.092 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.092 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.092 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.092 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.092 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.092 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.092 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.092 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.092 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.092 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.092 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.092 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.092 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.092 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.092 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.092 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.092 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.092 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.092 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.092 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:07.092 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:07.093 15:03:32 setup.sh.hugepages.even_2G_alloc -- 
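For reference, the block above is setup/common.sh's get_meminfo walking node0's meminfo one field at a time and returning the HugePages_Surp value (0 here). A condensed, self-contained sketch of that pattern follows; it is illustrative shell, not the captured script, and the helper name and the simplified "Node N" prefix stripping are assumptions:

    # Condensed sketch of the get_meminfo pattern traced above (the real helper
    # lives in setup/common.sh). With a node argument it reads
    # /sys/devices/system/node/node$N/meminfo, drops the "Node N " prefix,
    # then prints the value of the requested field.
    get_meminfo_sketch() {
        local get=$1 node=$2 mem_f=/proc/meminfo line var val _
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        while read -r line; do
            # per-node files prefix every line with "Node N "; the traced script
            # strips it with the extglob pattern "Node +([0-9]) "
            [[ $line == Node\ * ]] && line=${line#Node * }
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"        # value in kB, or a bare count for HugePages_*
                return 0
            fi
        done < "$mem_f"
        return 1
    }
    # e.g. surp=$(get_meminfo_sketch HugePages_Surp 0)   # node0 surplus pages, 0 in the run above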
setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:04:07.093 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:04:07.093 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:04:07.093 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 1 00:04:07.093 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:07.093 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:04:07.093 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:07.093 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:07.093 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.093 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:07.093 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:07.093 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.093 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.093 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.093 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.093 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27649360 kB' 'MemFree: 18800324 kB' 'MemUsed: 8849036 kB' 'SwapCached: 32 kB' 'Active: 2859988 kB' 'Inactive: 2656432 kB' 'Active(anon): 2730220 kB' 'Inactive(anon): 2350992 kB' 'Active(file): 129768 kB' 'Inactive(file): 305440 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5223868 kB' 'Mapped: 56468 kB' 'AnonPages: 292680 kB' 'Shmem: 4788628 kB' 'KernelStack: 10520 kB' 'PageTables: 3228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 178836 kB' 'Slab: 694460 kB' 'SReclaimable: 178836 kB' 'SUnreclaim: 515624 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:07.093 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.093 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.093 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.093 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.093 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.093 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.093 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.093 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.093 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.093 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.093 15:03:32 
[xtrace of the node1 field-by-field scan elided: the same per-field comparison loop as for node0, ending at the HugePages_Surp entry]
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.094 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.094 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.094 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.094 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.094 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.094 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.094 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.094 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.094 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.094 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.094 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.094 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.094 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.094 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.094 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.094 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.094 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.094 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.094 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.094 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.094 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.094 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.094 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.094 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.094 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.094 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.094 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.094 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.094 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.094 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.094 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.094 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:07.094 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:07.094 15:03:32 setup.sh.hugepages.even_2G_alloc -- 
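With the same scan now done for node1, hugepages.sh@114-116 folds each node's HugePages_Surp (0 in this run) on top of the reserved share in nodes_test, and hugepages.sh@125-127 then prints the "nodeN=512 expecting 512" checks seen just below. A rough sketch of that accounting, using this run's values and the get_meminfo_sketch helper above; the exact expressions in setup/hugepages.sh may differ:

    # Rough sketch of the per-node verification traced at hugepages.sh@114-127.
    resv=0                      # no reserved pages reported in this run
    nodes_test=(512 512)        # even_2G_alloc: 1024 pages split evenly across 2 nodes
    for node in "${!nodes_test[@]}"; do
        surp=$(get_meminfo_sketch HugePages_Surp "$node")   # 0 for both nodes in this trace
        (( nodes_test[node] += resv ))
        (( nodes_test[node] += surp ))
        echo "node$node=${nodes_test[node]} expecting ${nodes_test[node]}"
    done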
setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:04:07.094 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:04:07.094 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:04:07.094 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:04:07.094 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # echo 'node0=512 expecting 512' 00:04:07.094 node0=512 expecting 512 00:04:07.094 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:04:07.094 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:04:07.094 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:04:07.094 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # echo 'node1=512 expecting 512' 00:04:07.094 node1=512 expecting 512 00:04:07.094 15:03:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@129 -- # [[ 512 == \5\1\2 ]] 00:04:07.094 00:04:07.094 real 0m3.613s 00:04:07.094 user 0m1.315s 00:04:07.094 sys 0m2.355s 00:04:07.094 15:03:32 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:07.094 15:03:32 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:07.094 ************************************ 00:04:07.094 END TEST even_2G_alloc 00:04:07.094 ************************************ 00:04:07.352 15:03:32 setup.sh.hugepages -- setup/hugepages.sh@202 -- # run_test odd_alloc odd_alloc 00:04:07.352 15:03:32 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:07.352 15:03:32 setup.sh.hugepages -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:07.352 15:03:32 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:07.352 ************************************ 00:04:07.352 START TEST odd_alloc 00:04:07.352 ************************************ 00:04:07.352 15:03:32 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1129 -- # odd_alloc 00:04:07.352 15:03:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@149 -- # get_test_nr_hugepages 2098176 00:04:07.352 15:03:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@48 -- # local size=2098176 00:04:07.352 15:03:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 )) 00:04:07.352 15:03:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:04:07.353 15:03:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1025 00:04:07.353 15:03:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 00:04:07.353 15:03:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:04:07.353 15:03:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:04:07.353 15:03:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1025 00:04:07.353 15:03:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:04:07.353 15:03:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:04:07.353 15:03:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:04:07.353 15:03:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:04:07.353 15:03:32 
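The odd_alloc test starting here requests 2098176 kB of hugepage memory; with the default 2048 kB hugepage size that is 1024.5 pages, and the trace shows nr_hugepages landing on 1025. The round-up below is an assumption about how get_test_nr_hugepages arrives at that count, but the resulting value matches the trace:

    # 2098176 kB requested, 2048 kB per hugepage
    echo $(( 2098176 / 2048 ))            # 1024 (plain integer division)
    echo $(( (2098176 + 2047) / 2048 ))   # 1025, the nr_hugepages value seen in the trace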
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@73 -- # (( 0 > 0 )) 00:04:07.353 15:03:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:04:07.353 15:03:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=512 00:04:07.353 15:03:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # : 513 00:04:07.353 15:03:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 1 00:04:07.353 15:03:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:04:07.353 15:03:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=513 00:04:07.353 15:03:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # : 0 00:04:07.353 15:03:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:07.353 15:03:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:04:07.353 15:03:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@150 -- # HUGEMEM=2049 00:04:07.353 15:03:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@150 -- # setup output 00:04:07.353 15:03:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:07.353 15:03:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:10.670 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:10.670 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:10.670 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:10.670 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:10.670 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:10.670 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:10.670 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:10.670 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:10.670 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:10.670 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:10.670 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:10.670 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:10.670 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:10.670 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:10.670 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:10.670 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:10.670 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:10.670 15:03:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@151 -- # verify_nr_hugepages 00:04:10.670 15:03:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@88 -- # local node 00:04:10.670 15:03:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local sorted_t 00:04:10.670 15:03:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_s 00:04:10.670 15:03:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local surp 00:04:10.670 15:03:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local resv 00:04:10.670 15:03:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local anon 00:04:10.670 15:03:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:10.670 15:03:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:04:10.670 15:03:35 
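The hugepages.sh@80-83 loop traced above, just before the setup.sh device output, splits those 1025 pages across the 2 nodes: the highest-numbered node takes the floor share first and node0 absorbs the remainder, giving node1=512 and node0=513. A standalone sketch that reproduces the values seen in the trace; the exact statements in setup/hugepages.sh may differ:

    # Reproduces the per-node distribution from the trace: 1025 pages over 2
    # nodes -> node1=512, node0=513 (last node gets the floor share first).
    nr=1025 nodes=2
    declare -a nodes_test
    while (( nodes > 0 )); do
        nodes_test[nodes - 1]=$(( nr / nodes ))   # floor share for the current last node
        (( nr -= nodes_test[nodes - 1] ))         # remaining pages for the lower nodes
        (( nodes -= 1 ))
    done
    printf 'node%d=%d\n' 0 "${nodes_test[0]}" 1 "${nodes_test[1]}"   # node0=513, node1=512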
setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:10.670 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:10.670 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:10.670 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:10.670 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.670 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:10.670 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:10.670 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.670 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.670 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.670 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.671 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 43290608 kB' 'MemAvailable: 44911880 kB' 'Buffers: 6784 kB' 'Cached: 9650732 kB' 'SwapCached: 76 kB' 'Active: 7082360 kB' 'Inactive: 3191692 kB' 'Active(anon): 6175032 kB' 'Inactive(anon): 2351048 kB' 'Active(file): 907328 kB' 'Inactive(file): 840644 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 7698940 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 619844 kB' 'Mapped: 156268 kB' 'Shmem: 7909544 kB' 'KReclaimable: 570100 kB' 'Slab: 1570752 kB' 'SReclaimable: 570100 kB' 'SUnreclaim: 1000652 kB' 'KernelStack: 21872 kB' 'PageTables: 8660 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37480900 kB' 'Committed_AS: 10453260 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218036 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3210612 kB' 'DirectMap2M: 37369856 kB' 'DirectMap1G: 29360128 kB' 00:04:10.671 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.671 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.671 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.671 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.671 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.671 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.671 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.671 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.671 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.671 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.671 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.671 15:03:35 
[xtrace of the field-by-field scan elided: each /proc/meminfo field printed above is compared against AnonHugePages and skipped with 'continue' until the AnonHugePages entry is reached]
15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.672 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.672 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.672 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.672 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.672 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.672 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:10.672 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:10.672 15:03:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # anon=0 00:04:10.672 15:03:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:04:10.672 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:10.672 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:10.672 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:10.672 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:10.672 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.672 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:10.672 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:10.672 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.672 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.672 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.672 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.672 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 43293984 kB' 'MemAvailable: 44915256 kB' 'Buffers: 6784 kB' 'Cached: 9650736 kB' 'SwapCached: 76 kB' 'Active: 7082648 kB' 'Inactive: 3191692 kB' 'Active(anon): 6175320 kB' 'Inactive(anon): 2351048 kB' 'Active(file): 907328 kB' 'Inactive(file): 840644 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 7698940 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 620172 kB' 'Mapped: 156744 kB' 'Shmem: 7909548 kB' 'KReclaimable: 570100 kB' 'Slab: 1570740 kB' 'SReclaimable: 570100 kB' 'SUnreclaim: 1000640 kB' 'KernelStack: 21856 kB' 'PageTables: 8636 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37480900 kB' 'Committed_AS: 10454368 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217988 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3210612 kB' 'DirectMap2M: 37369856 kB' 'DirectMap1G: 29360128 kB' 00:04:10.672 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:10.672 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.672 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.672 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.672 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.672 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.672 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.672 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.672 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.672 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.672 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.672 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.672 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.672 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.672 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.672 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.672 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.672 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.672 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.672 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.672 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.672 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.672 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.672 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.672 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.672 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.672 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.672 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.672 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.672 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.672 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.672 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.672 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.672 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.672 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.672 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.672 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.672 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.672 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.672 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.672 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.672 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.672 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.672 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.672 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.672 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.672 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.672 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.672 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.672 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.672 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.672 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.673 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.674 
15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@98 -- # surp=0 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- 
# [[ -n '' ]] 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 43294528 kB' 'MemAvailable: 44915800 kB' 'Buffers: 6784 kB' 'Cached: 9650736 kB' 'SwapCached: 76 kB' 'Active: 7084184 kB' 'Inactive: 3191692 kB' 'Active(anon): 6176856 kB' 'Inactive(anon): 2351048 kB' 'Active(file): 907328 kB' 'Inactive(file): 840644 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 7698940 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 621696 kB' 'Mapped: 156744 kB' 'Shmem: 7909548 kB' 'KReclaimable: 570100 kB' 'Slab: 1570716 kB' 'SReclaimable: 570100 kB' 'SUnreclaim: 1000616 kB' 'KernelStack: 21856 kB' 'PageTables: 8636 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37480900 kB' 'Committed_AS: 10455976 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217972 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3210612 kB' 'DirectMap2M: 37369856 kB' 'DirectMap1G: 29360128 kB' 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.674 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.675 15:03:35 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.675 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.676 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.676 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.676 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.676 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.676 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.676 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.676 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.676 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.676 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.676 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.676 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.676 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.676 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.676 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.676 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.676 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.676 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.676 15:03:35 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.676 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.676 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.676 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.676 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.676 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:10.676 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:10.676 15:03:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # resv=0 00:04:10.676 15:03:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1025 00:04:10.676 nr_hugepages=1025 00:04:10.676 15:03:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:04:10.676 resv_hugepages=0 00:04:10.676 15:03:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:04:10.676 surplus_hugepages=0 00:04:10.676 15:03:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:04:10.676 anon_hugepages=0 00:04:10.676 15:03:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@106 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:10.676 15:03:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@108 -- # (( 1025 == nr_hugepages )) 00:04:10.676 15:03:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:04:10.676 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:10.676 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:10.676 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:10.676 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:10.676 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.676 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:10.676 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:10.676 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.676 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.676 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 43288020 kB' 'MemAvailable: 44909292 kB' 'Buffers: 6784 kB' 'Cached: 9650740 kB' 'SwapCached: 76 kB' 'Active: 7088012 kB' 'Inactive: 3191692 kB' 'Active(anon): 6180684 kB' 'Inactive(anon): 2351048 kB' 'Active(file): 907328 kB' 'Inactive(file): 840644 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 7698940 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 625504 kB' 'Mapped: 156744 kB' 'Shmem: 7909552 kB' 'KReclaimable: 570100 kB' 'Slab: 1570716 kB' 'SReclaimable: 570100 kB' 'SUnreclaim: 1000616 kB' 'KernelStack: 21856 kB' 'PageTables: 8636 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37480900 kB' 'Committed_AS: 10459440 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217976 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 
kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3210612 kB' 'DirectMap2M: 37369856 kB' 'DirectMap1G: 29360128 kB' 00:04:10.676 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.676 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.676 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.676 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.676 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.676 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.676 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.676 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.676 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.676 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.676 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.676 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.676 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.676 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.676 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.676 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.676 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.676 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.676 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.676 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.676 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.676 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.676 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.676 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.676 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.676 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.676 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.676 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.676 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.676 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.676 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.676 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.676 15:03:35 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.676 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.676 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.676 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.676 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.676 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.676 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.676 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.676 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.676 15:03:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.676 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.676 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.676 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.676 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.676 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.676 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.676 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.676 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.676 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.676 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.676 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.676 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.676 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.676 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.676 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.676 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.676 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.676 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.676 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.676 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.676 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.676 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.676 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.676 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.676 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.937 15:03:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.937 15:03:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.937 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@26 -- # local node 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=513 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@31 -- # no_nodes=2 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32634436 kB' 'MemFree: 24469092 kB' 'MemUsed: 8165344 kB' 'SwapCached: 44 kB' 'Active: 4224244 kB' 'Inactive: 535260 kB' 'Active(anon): 3446684 kB' 'Inactive(anon): 56 kB' 'Active(file): 777560 kB' 'Inactive(file): 535204 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4433676 kB' 'Mapped: 100276 kB' 'AnonPages: 329220 kB' 'Shmem: 3120868 kB' 'KernelStack: 11368 kB' 'PageTables: 5596 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 391264 kB' 'Slab: 876352 kB' 'SReclaimable: 391264 kB' 'SUnreclaim: 485088 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.938 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.939 15:03:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 1 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 
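The long run of '-- # continue' entries above is setup/common.sh's get_meminfo walking every field of the node0 meminfo dump until it reaches HugePages_Surp and echoes its value (0 here); the same scan now repeats for node1. A minimal standalone sketch of that per-node parsing pattern, assuming sysfs lines of the form 'Node 0 HugePages_Surp: 0' (the helper and variable names below are illustrative, not the exact SPDK functions):

get_node_meminfo() {                                # e.g. get_node_meminfo HugePages_Surp 0
    local key=$1 node=$2 var val _
    local f=/sys/devices/system/node/node${node}/meminfo
    # split on ':' and spaces, drop the leading "Node <N>" columns, keep field name and value
    while IFS=': ' read -r _ _ var val _; do
        [[ $var == "$key" ]] && { echo "$val"; return 0; }
    done < "$f"
    return 1
}

With zero surplus and zero reserved pages reported for both nodes, the '(( nodes_test[node] += 0 ))' adjustments leave the per-node totals from the dumps (513 on node0, 512 on node1) untouched.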
00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27649360 kB' 'MemFree: 18824708 kB' 'MemUsed: 8824652 kB' 'SwapCached: 32 kB' 'Active: 2859080 kB' 'Inactive: 2656432 kB' 'Active(anon): 2729312 kB' 'Inactive(anon): 2350992 kB' 'Active(file): 129768 kB' 'Inactive(file): 305440 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5223996 kB' 'Mapped: 56392 kB' 'AnonPages: 291048 kB' 'Shmem: 4788756 kB' 'KernelStack: 10488 kB' 'PageTables: 3124 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 178836 kB' 'Slab: 694364 kB' 'SReclaimable: 178836 kB' 'SUnreclaim: 515528 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.939 15:03:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.939 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
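For reference while the node1 scan continues: the odd_alloc case asks for an odd total of 1025 2 MB hugepages, and the node dumps above report HugePages_Total of 513 on node0 and 512 on node1, which is what the '1025 == nr_hugepages + surp + resv' check at hugepages.sh@109 and the 'expecting' lines further down rely on. A small arithmetic sketch of that split, assuming (as the trace suggests) the leftover page from an even division lands on the first node:

nr_hugepages=1025 no_nodes=2
per_node=$(( nr_hugepages / no_nodes ))                     # 512
remainder=$(( nr_hugepages % no_nodes ))                    # 1
echo "node0=$(( per_node + remainder )) node1=$per_node"    # node0=513 node1=512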
00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # echo 'node0=513 expecting 513' 00:04:10.940 node0=513 expecting 513 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:04:10.940 15:03:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:04:10.941 15:03:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:04:10.941 15:03:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # echo 'node1=512 expecting 512' 00:04:10.941 node1=512 expecting 512 00:04:10.941 15:03:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@129 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:04:10.941 00:04:10.941 real 0m3.613s 00:04:10.941 user 0m1.349s 00:04:10.941 sys 0m2.318s 00:04:10.941 15:03:36 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:10.941 15:03:36 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:10.941 ************************************ 00:04:10.941 END TEST odd_alloc 00:04:10.941 ************************************ 00:04:10.941 15:03:36 setup.sh.hugepages -- setup/hugepages.sh@203 -- # run_test 
custom_alloc custom_alloc 00:04:10.941 15:03:36 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:10.941 15:03:36 setup.sh.hugepages -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:10.941 15:03:36 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:10.941 ************************************ 00:04:10.941 START TEST custom_alloc 00:04:10.941 ************************************ 00:04:10.941 15:03:36 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1129 -- # custom_alloc 00:04:10.941 15:03:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@157 -- # local IFS=, 00:04:10.941 15:03:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@159 -- # local node 00:04:10.941 15:03:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@160 -- # nodes_hp=() 00:04:10.941 15:03:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@160 -- # local nodes_hp 00:04:10.941 15:03:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@162 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:10.941 15:03:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@164 -- # get_test_nr_hugepages 1048576 00:04:10.941 15:03:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@48 -- # local size=1048576 00:04:10.941 15:03:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 )) 00:04:10.941 15:03:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:04:10.941 15:03:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=512 00:04:10.941 15:03:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 00:04:10.941 15:03:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:04:10.941 15:03:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:04:10.941 15:03:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=512 00:04:10.941 15:03:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:04:10.941 15:03:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:04:10.941 15:03:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:04:10.941 15:03:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:04:10.941 15:03:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@73 -- # (( 0 > 0 )) 00:04:10.941 15:03:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:04:10.941 15:03:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=256 00:04:10.941 15:03:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # : 256 00:04:10.941 15:03:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 1 00:04:10.941 15:03:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:04:10.941 15:03:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=256 00:04:10.941 15:03:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # : 0 00:04:10.941 15:03:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:10.941 15:03:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:04:10.941 15:03:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@165 -- # nodes_hp[0]=512 00:04:10.941 15:03:36 setup.sh.hugepages.custom_alloc 
-- setup/hugepages.sh@166 -- # (( 2 > 1 )) 00:04:10.941 15:03:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # get_test_nr_hugepages 2097152 00:04:10.941 15:03:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@48 -- # local size=2097152 00:04:10.941 15:03:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 )) 00:04:10.941 15:03:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:04:10.941 15:03:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1024 00:04:10.941 15:03:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 00:04:10.941 15:03:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:04:10.941 15:03:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:04:10.941 15:03:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024 00:04:10.941 15:03:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:04:10.941 15:03:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:04:10.941 15:03:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:04:10.941 15:03:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:04:10.941 15:03:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@73 -- # (( 1 > 0 )) 00:04:10.941 15:03:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:10.941 15:03:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # nodes_test[_no_nodes]=512 00:04:10.941 15:03:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@77 -- # return 0 00:04:10.941 15:03:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@168 -- # nodes_hp[1]=1024 00:04:10.941 15:03:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@171 -- # for node in "${!nodes_hp[@]}" 00:04:10.941 15:03:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:10.941 15:03:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@173 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:10.941 15:03:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@171 -- # for node in "${!nodes_hp[@]}" 00:04:10.941 15:03:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:10.941 15:03:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@173 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:10.941 15:03:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # get_test_nr_hugepages_per_node 00:04:10.941 15:03:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:04:10.941 15:03:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:04:10.941 15:03:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024 00:04:10.941 15:03:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:04:10.941 15:03:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:04:10.941 15:03:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:04:10.941 15:03:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:04:10.941 15:03:36 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@73 -- # (( 2 > 0 )) 00:04:10.941 15:03:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:10.941 15:03:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # nodes_test[_no_nodes]=512 00:04:10.941 15:03:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:10.941 15:03:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # nodes_test[_no_nodes]=1024 00:04:10.941 15:03:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@77 -- # return 0 00:04:10.941 15:03:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:04:10.941 15:03:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # setup output 00:04:10.941 15:03:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:10.941 15:03:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:14.221 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:14.221 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:14.221 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:14.221 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:14.221 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:14.221 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:14.221 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:14.221 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:14.221 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:14.221 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:14.221 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:14.221 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:14.221 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:14.221 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:14.221 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:14.221 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:14.221 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:14.484 15:03:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nr_hugepages=1536 00:04:14.485 15:03:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # verify_nr_hugepages 00:04:14.485 15:03:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@88 -- # local node 00:04:14.485 15:03:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local sorted_t 00:04:14.485 15:03:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_s 00:04:14.485 15:03:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local surp 00:04:14.485 15:03:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local resv 00:04:14.485 15:03:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local anon 00:04:14.485 15:03:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:14.485 15:03:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:04:14.485 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:14.485 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:14.485 
15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:14.485 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:14.485 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:14.485 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:14.485 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:14.485 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:14.485 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:14.485 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.485 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.485 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 42241744 kB' 'MemAvailable: 43863016 kB' 'Buffers: 6784 kB' 'Cached: 9650904 kB' 'SwapCached: 76 kB' 'Active: 7083660 kB' 'Inactive: 3191692 kB' 'Active(anon): 6176332 kB' 'Inactive(anon): 2351048 kB' 'Active(file): 907328 kB' 'Inactive(file): 840644 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 7698940 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 620840 kB' 'Mapped: 156312 kB' 'Shmem: 7909716 kB' 'KReclaimable: 570100 kB' 'Slab: 1570608 kB' 'SReclaimable: 570100 kB' 'SUnreclaim: 1000508 kB' 'KernelStack: 21952 kB' 'PageTables: 8760 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36957636 kB' 'Committed_AS: 10456592 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218260 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3210612 kB' 'DirectMap2M: 37369856 kB' 'DirectMap1G: 29360128 kB' 00:04:14.485 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.485 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.485 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.485 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.485 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.485 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.485 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.485 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.485 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.485 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.485 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.485 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.485 15:03:39 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.485 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.485 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.485 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.485 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.485 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.485 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.485 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.485 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.485 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.485 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.485 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.485 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.485 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.485 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.485 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.485 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.485 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.485 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.485 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.485 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.485 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.485 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.485 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.485 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.485 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.485 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.485 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.485 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.485 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.485 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.485 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.485 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.485 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.485 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.485 15:03:39 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.485 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.485 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.485 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.485 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.485 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.485 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.485 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.485 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.485 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.485 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.485 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.485 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.485 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.485 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.485 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.485 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.485 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.485 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.485 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.485 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.485 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.485 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.485 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.485 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.485 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.485 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.485 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.485 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.485 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.485 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.485 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.485 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.485 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.485 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.485 15:03:39 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.485 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.485 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # anon=0 00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 42242712 kB' 'MemAvailable: 43863984 kB' 'Buffers: 6784 kB' 'Cached: 9650908 kB' 'SwapCached: 76 kB' 'Active: 7083128 kB' 'Inactive: 3191692 kB' 'Active(anon): 6175800 kB' 'Inactive(anon): 2351048 kB' 'Active(file): 907328 kB' 'Inactive(file): 840644 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 7698940 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 620288 kB' 'Mapped: 156260 kB' 'Shmem: 7909720 kB' 'KReclaimable: 570100 kB' 'Slab: 1570604 kB' 'SReclaimable: 570100 kB' 'SUnreclaim: 1000504 kB' 'KernelStack: 21888 kB' 'PageTables: 8316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36957636 kB' 'Committed_AS: 10453972 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218068 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 
'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3210612 kB' 'DirectMap2M: 37369856 kB' 'DirectMap1G: 29360128 kB' 00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.486 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.487 15:03:39 
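As a quick cross-check, the meminfo snapshot printed above is internally consistent: 1536 huge pages of 2048 kB each account exactly for the reported Hugetlb figure.

```bash
# HugePages_Total x Hugepagesize should match the 'Hugetlb:' line of the snapshot.
echo $((1536 * 2048))   # 3145728 (kB), i.e. 'Hugetlb: 3145728 kB'
```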
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.487 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.488 15:03:39 
setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@98 -- # surp=0 00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 42241964 kB' 'MemAvailable: 43863236 kB' 'Buffers: 6784 kB' 'Cached: 9650928 kB' 'SwapCached: 76 kB' 'Active: 7083096 kB' 'Inactive: 3191692 kB' 'Active(anon): 6175768 kB' 'Inactive(anon): 2351048 kB' 'Active(file): 907328 kB' 'Inactive(file): 840644 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 7698940 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 620248 kB' 'Mapped: 156264 kB' 'Shmem: 7909740 kB' 'KReclaimable: 570100 kB' 'Slab: 1570744 kB' 'SReclaimable: 570100 kB' 'SUnreclaim: 1000644 kB' 'KernelStack: 21856 kB' 'PageTables: 8636 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36957636 kB' 'Committed_AS: 10453996 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218068 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3210612 kB' 'DirectMap2M: 37369856 kB' 'DirectMap1G: 29360128 kB' 00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.488 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.489 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.489 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.489 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.489 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.489 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.489 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.489 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.489 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.489 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.489 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.489 15:03:39 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # continue 00:04:14.489 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.489 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.489 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.489 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.489 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.489 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.489 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.489 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.489 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.489 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.489 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.489 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.489 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.489 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.489 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.489 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.489 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.489 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.489 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.489 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.489 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.489 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.489 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.489 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.489 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.489 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.489 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.489 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.489 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.489 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.489 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.489 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.489 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.489 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.489 15:03:39 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.489 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.489 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.489 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.489 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.489 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.489 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.489 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.489 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.489 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.489 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.489 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.489 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.489 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.489 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.489 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.489 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.489 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.489 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.489 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.489 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.489 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.489 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.489 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.489 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.489 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.489 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.489 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.489 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.489 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.489 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.489 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.489 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.489 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.489 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.489 15:03:39 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.489 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.489 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.489 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.489 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.489 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.489 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.489 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.489 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.489 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.489 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.489 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.489 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.489 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.489 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.489 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.489 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.489 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.489 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.489 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.489 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.489 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.489 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.489 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.489 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.490 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.490 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.490 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.490 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.490 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.490 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.490 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.490 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.490 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.490 15:03:39 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:14.490 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.490 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.490 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.490 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.490 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.490 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.490 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.490 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.490 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.490 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.490 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.490 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.490 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.490 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.490 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.490 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.490 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.490 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.490 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.490 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.490 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.490 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.490 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.490 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.490 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.490 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.490 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.490 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.490 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.490 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.490 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.490 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.490 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.490 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.490 15:03:39 
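The mem_f=/proc/meminfo, node meminfo existence test and mapfile records in these calls come from the helper's setup phase: pick a global or per-node meminfo file, read it into an array, and strip the "Node N " prefixes that per-node files carry. A sketch of that step, assuming the standard sysfs layout (not the SPDK source itself):

```bash
#!/usr/bin/env bash
# Select the meminfo source and normalise per-node lines so the same key scan works
# for both /proc/meminfo and /sys/devices/system/node/nodeN/meminfo.
shopt -s extglob                      # needed for the +([0-9]) pattern below
node=${1:-}                           # empty => global /proc/meminfo, as in the trace
mem_f=/proc/meminfo
if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo
fi
mapfile -t mem < "$mem_f"             # one "Key: value" line per array element
mem=("${mem[@]#Node +([0-9]) }")      # per-node files prefix every line with "Node N "
printf '%s\n' "${mem[@]}" | head -3   # e.g. MemTotal / MemFree / MemAvailable
```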
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.490 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.490 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.490 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.490 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.490 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.490 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.490 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.490 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.490 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.490 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.490 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.490 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.490 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.490 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.490 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.490 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.490 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.490 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.490 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.490 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.490 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:14.490 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:14.490 15:03:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # resv=0 00:04:14.490 15:03:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1536 00:04:14.490 nr_hugepages=1536 00:04:14.490 15:03:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:04:14.490 resv_hugepages=0 00:04:14.490 15:03:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:04:14.490 surplus_hugepages=0 00:04:14.490 15:03:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:04:14.490 anon_hugepages=0 00:04:14.490 15:03:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@106 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:14.490 15:03:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@108 -- # (( 1536 == nr_hugepages )) 00:04:14.490 15:03:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:04:14.490 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:14.490 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:14.490 
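With anon, surp and resv all read back as 0 and nr_hugepages echoed as 1536, the accounting check traced just above reduces to a simple equality. An illustrative restatement of that check (not the SPDK hugepages.sh source), using the values from the trace:

```bash
#!/usr/bin/env bash
# The expected huge-page count must match HugePages_Total once surplus and reserved
# pages are taken into account; the values below are the ones echoed in the trace.
expected=1536
nr_hugepages=1536        # HugePages_Total from the snapshot
anon=0 surp=0 resv=0     # AnonHugePages, HugePages_Surp, HugePages_Rsvd
(( expected == nr_hugepages + surp + resv )) || echo "unexpected surplus/reserved pages" >&2
(( expected == nr_hugepages )) &&
    echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"
```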
15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:14.490 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:14.490 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:14.490 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:14.490 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:14.490 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:14.490 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:14.490 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.490 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.490 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 42242924 kB' 'MemAvailable: 43864196 kB' 'Buffers: 6784 kB' 'Cached: 9650968 kB' 'SwapCached: 76 kB' 'Active: 7082772 kB' 'Inactive: 3191692 kB' 'Active(anon): 6175444 kB' 'Inactive(anon): 2351048 kB' 'Active(file): 907328 kB' 'Inactive(file): 840644 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 7698940 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 619848 kB' 'Mapped: 156264 kB' 'Shmem: 7909780 kB' 'KReclaimable: 570100 kB' 'Slab: 1570744 kB' 'SReclaimable: 570100 kB' 'SUnreclaim: 1000644 kB' 'KernelStack: 21840 kB' 'PageTables: 8584 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36957636 kB' 'Committed_AS: 10454016 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218068 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3210612 kB' 'DirectMap2M: 37369856 kB' 'DirectMap1G: 29360128 kB' 00:04:14.490 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.490 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.490 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.490 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.490 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.490 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.490 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.490 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.490 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.490 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.490 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.490 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.490 15:03:39 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.490 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.490 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.490 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.490 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.490 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.490 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.490 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.490 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.490 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.491 15:03:39 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.491 
15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.491 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.492 15:03:39 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
[[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@26 -- # local node 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@31 -- # no_nodes=2 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32634436 kB' 'MemFree: 24462036 kB' 'MemUsed: 8172400 kB' 'SwapCached: 44 kB' 'Active: 4221984 kB' 'Inactive: 535260 kB' 'Active(anon): 3444424 kB' 'Inactive(anon): 56 kB' 'Active(file): 777560 kB' 'Inactive(file): 535204 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4433684 kB' 'Mapped: 99796 kB' 'AnonPages: 326736 kB' 'Shmem: 3120876 kB' 'KernelStack: 11336 kB' 'PageTables: 5356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 391264 kB' 'Slab: 876468 kB' 'SReclaimable: 391264 kB' 'SUnreclaim: 485204 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.492 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.493 15:03:39 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:04:14.493 15:03:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:04:14.752 15:03:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 1 00:04:14.752 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:14.752 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:04:14.752 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:14.752 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:14.752 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:14.752 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:14.752 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:14.752 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:14.752 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:14.752 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27649360 kB' 'MemFree: 17781324 kB' 'MemUsed: 9868036 kB' 'SwapCached: 32 kB' 'Active: 2861524 kB' 'Inactive: 2656432 kB' 'Active(anon): 2731756 kB' 'Inactive(anon): 2350992 kB' 'Active(file): 129768 kB' 'Inactive(file): 305440 kB' 'Unevictable: 0 kB' 
'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5224180 kB' 'Mapped: 56468 kB' 'AnonPages: 293848 kB' 'Shmem: 4788940 kB' 'KernelStack: 10536 kB' 'PageTables: 3332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 178836 kB' 'Slab: 694276 kB' 'SReclaimable: 178836 kB' 'SUnreclaim: 515440 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:14.752 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.752 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.752 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.752 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.752 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.752 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.752 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.752 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.752 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.752 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.752 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.752 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.752 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.753 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.753 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.753 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.753 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.753 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.753 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.753 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.753 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.753 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.753 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.753 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.753 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.753 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.753 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.753 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.753 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.753 15:03:39 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:14.753 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.753 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.753 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.753 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.753 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.753 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.753 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.753 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.753 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.753 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.753 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.753 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.753 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.753 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.753 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.753 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.753 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.753 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.753 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.753 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.753 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.753 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.753 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.753 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.753 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.753 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.753 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.753 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.753 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.753 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.753 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.753 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.753 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.753 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.753 15:03:39 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.753 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.753 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.753 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.753 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.753 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.753 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.753 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.753 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.753 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.753 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.753 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.753 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.753 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.753 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.753 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.753 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.753 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.753 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.753 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.753 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.753 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.753 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.753 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.753 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.753 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.753 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.753 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.753 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.753 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.753 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.753 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.753 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.753 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.753 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.753 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.753 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.753 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.753 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.753 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.753 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.753 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.753 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.753 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.753 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.753 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.754 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.754 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.754 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.754 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.754 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.754 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.754 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.754 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.754 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.754 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.754 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.754 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.754 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.754 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.754 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.754 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.754 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.754 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.754 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.754 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.754 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.754 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.754 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.754 15:03:39 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:14.754 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.754 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.754 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.754 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.754 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.754 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.754 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.754 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.754 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.754 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:14.754 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.754 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.754 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.754 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:14.754 15:03:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:14.754 15:03:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:04:14.754 15:03:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:04:14.754 15:03:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:04:14.754 15:03:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:04:14.754 15:03:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # echo 'node0=512 expecting 512' 00:04:14.754 node0=512 expecting 512 00:04:14.754 15:03:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:04:14.754 15:03:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:04:14.754 15:03:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:04:14.754 15:03:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # echo 'node1=1024 expecting 1024' 00:04:14.754 node1=1024 expecting 1024 00:04:14.754 15:03:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@129 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:04:14.754 00:04:14.754 real 0m3.700s 00:04:14.754 user 0m1.433s 00:04:14.754 sys 0m2.336s 00:04:14.754 15:03:39 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:14.754 15:03:39 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:14.754 ************************************ 00:04:14.754 END TEST custom_alloc 00:04:14.754 ************************************ 00:04:14.754 15:03:39 setup.sh.hugepages -- setup/hugepages.sh@204 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:14.754 15:03:39 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:14.754 15:03:39 setup.sh.hugepages -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:04:14.754 15:03:39 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:14.754 ************************************ 00:04:14.754 START TEST no_shrink_alloc 00:04:14.754 ************************************ 00:04:14.754 15:03:39 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1129 -- # no_shrink_alloc 00:04:14.754 15:03:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@185 -- # get_test_nr_hugepages 2097152 0 00:04:14.754 15:03:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@48 -- # local size=2097152 00:04:14.754 15:03:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # (( 2 > 1 )) 00:04:14.754 15:03:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # shift 00:04:14.754 15:03:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # node_ids=('0') 00:04:14.754 15:03:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # local node_ids 00:04:14.754 15:03:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:04:14.754 15:03:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1024 00:04:14.754 15:03:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 0 00:04:14.754 15:03:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@61 -- # user_nodes=('0') 00:04:14.754 15:03:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:04:14.754 15:03:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024 00:04:14.754 15:03:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:04:14.754 15:03:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:04:14.754 15:03:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:04:14.754 15:03:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@68 -- # (( 1 > 0 )) 00:04:14.754 15:03:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # for _no_nodes in "${user_nodes[@]}" 00:04:14.754 15:03:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # nodes_test[_no_nodes]=1024 00:04:14.754 15:03:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@72 -- # return 0 00:04:14.754 15:03:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@188 -- # NRHUGE=1024 00:04:14.754 15:03:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@188 -- # HUGENODE=0 00:04:14.754 15:03:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@188 -- # setup output 00:04:14.754 15:03:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:14.755 15:03:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:18.031 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:18.031 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:18.031 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:18.031 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:18.031 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:18.031 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:18.031 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:18.031 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:18.031 0000:80:04.7 
(8086 2021): Already using the vfio-pci driver 00:04:18.031 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:18.031 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:18.031 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:18.031 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:18.031 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:18.031 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:18.031 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:18.031 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:18.031 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@189 -- # verify_nr_hugepages 00:04:18.031 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@88 -- # local node 00:04:18.031 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local sorted_t 00:04:18.031 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_s 00:04:18.031 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local surp 00:04:18.031 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local resv 00:04:18.031 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local anon 00:04:18.031 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:18.294 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:04:18.294 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:18.294 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:18.294 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:18.294 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:18.294 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:18.294 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:18.294 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:18.294 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:18.294 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:18.294 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.294 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.294 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 43286792 kB' 'MemAvailable: 44908064 kB' 'Buffers: 6784 kB' 'Cached: 9651092 kB' 'SwapCached: 76 kB' 'Active: 7082860 kB' 'Inactive: 3191692 kB' 'Active(anon): 6175532 kB' 'Inactive(anon): 2351048 kB' 'Active(file): 907328 kB' 'Inactive(file): 840644 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 7698940 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 619352 kB' 'Mapped: 156288 kB' 'Shmem: 7909904 kB' 'KReclaimable: 570100 kB' 'Slab: 1570660 kB' 'SReclaimable: 570100 kB' 'SUnreclaim: 1000560 kB' 'KernelStack: 21792 kB' 'PageTables: 8344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 
'CommitLimit: 37481924 kB' 'Committed_AS: 10454300 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218068 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3210612 kB' 'DirectMap2M: 37369856 kB' 'DirectMap1G: 29360128 kB' 00:04:18.294 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.294 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.294 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.294 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.294 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.294 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.294 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.294 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.294 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.294 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.294 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.294 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.294 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.294 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.294 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.294 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.294 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.294 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.294 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.294 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.294 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.294 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.294 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.294 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.294 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.294 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.294 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.294 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.294 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.294 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.294 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.294 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.295 15:03:43 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.295 15:03:43 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.295 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.296 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.296 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.296 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.296 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.296 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.296 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.296 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.296 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.296 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.296 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.296 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.296 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.296 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.296 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.296 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.296 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:18.296 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:18.296 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # anon=0 
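The trace above is setup/common.sh's get_meminfo walking /proc/meminfo with IFS=': ' and read -r, skipping every key until the requested one (here AnonHugePages) and echoing its value, which hugepages.sh then records as anon=0. A minimal standalone sketch of that lookup, assuming a hypothetical helper name meminfo_value rather than the script's actual mapfile-based implementation:

#!/usr/bin/env bash
# Sketch only: look up a single key in /proc/meminfo the way the traced loop does.
meminfo_value() {
	local get=$1 var val _
	while IFS=': ' read -r var val _; do
		[[ $var == "$get" ]] || continue   # skip non-matching keys, as in the trace
		echo "$val"                        # value in kB (the "kB" suffix lands in _)
		return 0
	done < /proc/meminfo
	return 1
}

meminfo_value AnonHugePages    # prints e.g. 0 on this node
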
00:04:18.296 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:04:18.296 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:18.296 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:18.296 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:18.296 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:18.296 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:18.296 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:18.296 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:18.296 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:18.296 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:18.296 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.296 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.296 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 43287232 kB' 'MemAvailable: 44908504 kB' 'Buffers: 6784 kB' 'Cached: 9651096 kB' 'SwapCached: 76 kB' 'Active: 7081944 kB' 'Inactive: 3191692 kB' 'Active(anon): 6174616 kB' 'Inactive(anon): 2351048 kB' 'Active(file): 907328 kB' 'Inactive(file): 840644 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 7698940 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 618952 kB' 'Mapped: 156276 kB' 'Shmem: 7909908 kB' 'KReclaimable: 570100 kB' 'Slab: 1570768 kB' 'SReclaimable: 570100 kB' 'SUnreclaim: 1000668 kB' 'KernelStack: 21824 kB' 'PageTables: 8436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481924 kB' 'Committed_AS: 10454448 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218036 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3210612 kB' 'DirectMap2M: 37369856 kB' 'DirectMap1G: 29360128 kB' 00:04:18.296 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.296 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.296 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.296 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.296 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.296 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.296 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.296 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.296 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.296 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.296 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.296 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.296 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.296 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.296 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.296 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.296 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.296 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.296 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.296 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.296 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.296 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.296 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.296 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.296 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.296 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.296 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.296 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.296 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.296 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.296 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.296 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.296 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.296 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.296 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.296 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.296 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.296 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.296 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.296 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.296 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.296 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.296 15:03:43 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.296 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.296 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.296 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.296 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.296 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.296 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.296 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.296 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.296 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.296 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.296 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.296 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.296 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.296 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.296 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.296 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.296 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.296 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.296 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.296 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.296 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.296 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.296 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.296 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.296 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.296 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.296 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.296 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.296 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.297 15:03:43 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.297 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.298 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.298 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.298 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.298 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.298 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.298 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.298 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.298 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.298 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.298 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.298 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.298 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.298 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.298 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.298 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.298 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.298 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.298 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.298 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.298 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:18.298 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:18.298 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # surp=0 00:04:18.298 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:04:18.298 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:18.298 15:03:43 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@18 -- # local node= 00:04:18.298 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:18.298 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:18.298 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:18.298 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:18.298 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:18.298 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:18.298 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:18.298 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.298 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.298 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 43287472 kB' 'MemAvailable: 44908744 kB' 'Buffers: 6784 kB' 'Cached: 9651116 kB' 'SwapCached: 76 kB' 'Active: 7081948 kB' 'Inactive: 3191692 kB' 'Active(anon): 6174620 kB' 'Inactive(anon): 2351048 kB' 'Active(file): 907328 kB' 'Inactive(file): 840644 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 7698940 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 618984 kB' 'Mapped: 156276 kB' 'Shmem: 7909928 kB' 'KReclaimable: 570100 kB' 'Slab: 1570768 kB' 'SReclaimable: 570100 kB' 'SUnreclaim: 1000668 kB' 'KernelStack: 21824 kB' 'PageTables: 8436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481924 kB' 'Committed_AS: 10454476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218036 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3210612 kB' 'DirectMap2M: 37369856 kB' 'DirectMap1G: 29360128 kB' 00:04:18.298 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.298 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.298 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.298 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.298 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.298 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.298 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.298 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.298 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.298 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.298 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.298 15:03:43 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.298 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.298 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.298 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.298 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.298 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.298 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.298 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.298 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.298 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.298 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.298 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.298 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.298 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.298 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.298 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.298 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.298 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.298 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.298 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.298 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.298 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.298 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.298 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.298 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.298 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.298 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.298 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.298 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.298 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.298 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.298 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.298 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.298 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.298 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.298 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.298 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.298 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.298 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.298 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.298 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.298 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.298 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.298 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.298 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.298 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.298 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.298 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.298 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.298 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.298 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.298 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.298 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.298 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.298 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.298 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.298 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.299 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.299 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.299 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.299 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.299 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.299 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.299 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.299 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.299 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.299 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:18.299 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # IFS=': '
00:04:18.299 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [xtrace for the remaining /proc/meminfo fields omitted: the read loop skips every non-matching field, AnonPages through HugePages_Free, until it reaches HugePages_Rsvd]
00:04:18.300 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:18.300 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:18.300 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:18.300 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # resv=0
00:04:18.300 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024
00:04:18.300 nr_hugepages=1024
00:04:18.300 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0
00:04:18.300 resv_hugepages=0
00:04:18.300 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0
00:04:18.300 surplus_hugepages=0
00:04:18.300 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0
00:04:18.300 anon_hugepages=0
00:04:18.300 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:18.300 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages ))
00:04:18.300 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total
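(Editor's note: the condensed trace above is the setup/common.sh get_meminfo helper scanning /proc/meminfo one "Field: value" line at a time and printing the value of the requested field. Below is a minimal bash sketch of what the traced helper appears to do, reconstructed from the xtrace; the real SPDK helper may differ in detail.)

    shopt -s extglob                          # needed for the +([0-9]) pattern below
    get_meminfo() {                           # usage: get_meminfo <field> [numa-node]
        local get=$1 node=$2 var val _
        local mem_f=/proc/meminfo mem line
        # A node argument switches the source to that node's meminfo file.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")      # strip the "Node N " prefix of per-node files
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }   # e.g. 0 for HugePages_Rsvd
        done
        return 1
    }

(That is why the call above prints 0 for HugePages_Rsvd; the same helper is reused below for HugePages_Total, HugePages_Surp and AnonHugePages.)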
00:04:18.300 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:18.300 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:18.300 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:18.300 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:18.300 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:18.300 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:18.300 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:18.300 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:18.300 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:18.300 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:18.300 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:18.300 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 43288496 kB' 'MemAvailable: 44909768 kB' 'Buffers: 6784 kB' 'Cached: 9651148 kB' 'SwapCached: 76 kB' 'Active: 7082588 kB' 'Inactive: 3191692 kB' 'Active(anon): 6175260 kB' 'Inactive(anon): 2351048 kB' 'Active(file): 907328 kB' 'Inactive(file): 840644 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 7698940 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 619564 kB' 'Mapped: 156276 kB' 'Shmem: 7909960 kB' 'KReclaimable: 570100 kB' 'Slab: 1570768 kB' 'SReclaimable: 570100 kB' 'SUnreclaim: 1000668 kB' 'KernelStack: 21872 kB' 'PageTables: 8624 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481924 kB' 'Committed_AS: 10454864 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218052 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3210612 kB' 'DirectMap2M: 37369856 kB' 'DirectMap1G: 29360128 kB'
00:04:18.300 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [xtrace for the intervening fields omitted: the read loop skips every non-matching field, MemTotal through Unaccepted, until it reaches HugePages_Total]
00:04:18.302 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:18.302 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:04:18.302 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:18.302 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:18.302 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@111 -- # get_nodes
00:04:18.302 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@26 -- # local node
00:04:18.302 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:18.302 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024
00:04:18.302 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:18.302 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=0
00:04:18.302 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@31 -- # no_nodes=2
00:04:18.302 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 ))
00:04:18.302 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}"
00:04:18.302 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv ))
00:04:18.302 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0
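(Editor's note: setup/hugepages.sh@106-116 above is the accounting that no_shrink_alloc verifies: the configured page count must match HugePages_Total once surplus and reserved pages are accounted for, and the expected per-node counts are then built up with get_meminfo <field> <node>. A rough bash condensation of that logic follows, reusing the get_meminfo sketch above; the variable names come from the trace, the values from this run, and the real setup/hugepages.sh may differ.)

    shopt -s extglob
    nr_hugepages=1024 surp=0 resv=0                         # from the echoes above
    (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv )) \
        || echo "unexpected hugepage accounting"            # 1024 == 1024 + 0 + 0 here

    # get_nodes: one slot per /sys/devices/system/node/node<N>; this host has two
    # nodes and the kernel reports all 1024 pages on node 0 (nodes_sys[0]=1024).
    declare -A nodes_sys=([0]=1024 [1]=0)
    declare -A nodes_test=([0]=1024)                        # only node 0 is checked in this run

    # Per node, add the reserved pages plus that node's surplus (both 0 in this run).
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv + $(get_meminfo HugePages_Surp "$node") ))
    done

(The trace that follows is the node-0 instance of that per-node lookup.)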
00:04:18.302 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:18.302 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:04:18.302 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:18.302 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:18.302 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:18.302 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:18.302 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:18.302 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:18.302 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:18.302 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:18.302 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32634436 kB' 'MemFree: 23419536 kB' 'MemUsed: 9214900 kB' 'SwapCached: 44 kB' 'Active: 4221304 kB' 'Inactive: 535260 kB' 'Active(anon): 3443744 kB' 'Inactive(anon): 56 kB' 'Active(file): 777560 kB' 'Inactive(file): 535204 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4433772 kB' 'Mapped: 99808 kB' 'AnonPages: 325992 kB' 'Shmem: 3120964 kB' 'KernelStack: 11352 kB' 'PageTables: 5408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 391264 kB' 'Slab: 876548 kB' 'SReclaimable: 391264 kB' 'SUnreclaim: 485284 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:18.302 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:18.302 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [xtrace for the intervening fields omitted: the read loop skips every non-matching node0 field, MemTotal through HugePages_Free, until it reaches HugePages_Surp]
00:04:18.303 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:18.303 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:18.303 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:18.303 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 ))
00:04:18.303 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}"
00:04:18.303 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1
00:04:18.303 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1
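(Editor's note: with a node argument, the same get_meminfo helper reads /sys/devices/system/node/node0/meminfo, see common.sh@23-24 above, and strips the "Node 0 " prefix from each line at common.sh@29; that is how the per-node HugePages_Surp value of 0 was just obtained. For reference, the kernel also exposes the per-node 2048 kB hugepage pool directly in sysfs. The sketch below uses that standard interface as an assumption; it is not taken from the traced scripts/setup.sh.)

    node=0
    hp=/sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages
    echo "node$node nr_hugepages: $(< "$hp")"    # 1024 in this run
    # echo 512 | sudo tee "$hp"   # would shrink the pool; the run below instead sets
    #                             # CLEAR_HUGE=no NRHUGE=512 HUGENODE=0 and setup.sh
    #                             # reports "Requested 512 hugepages but 1024 already
    #                             # allocated on node0", leaving the larger pool in place.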
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # echo 'node0=1024 expecting 1024' 00:04:18.303 node0=1024 expecting 1024 00:04:18.303 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@129 -- # [[ 1024 == \1\0\2\4 ]] 00:04:18.303 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # CLEAR_HUGE=no 00:04:18.303 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # NRHUGE=512 00:04:18.303 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # HUGENODE=0 00:04:18.303 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # setup output 00:04:18.303 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:18.304 15:03:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:21.589 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:21.589 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:21.589 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:21.589 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:21.589 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:21.589 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:21.589 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:21.589 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:21.589 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:21.589 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:21.589 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:21.589 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:21.589 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:21.589 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:21.589 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:21.589 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:21.589 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:21.589 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:21.589 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@194 -- # verify_nr_hugepages 00:04:21.589 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@88 -- # local node 00:04:21.589 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local sorted_t 00:04:21.589 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_s 00:04:21.589 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local surp 00:04:21.589 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local resv 00:04:21.589 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local anon 00:04:21.589 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:21.589 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:04:21.589 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:21.589 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:21.589 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:21.589 15:03:46 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@20 -- # local mem_f mem 00:04:21.589 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:21.589 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:21.589 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:21.589 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:21.589 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:21.589 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.589 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.590 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 43272224 kB' 'MemAvailable: 44893496 kB' 'Buffers: 6784 kB' 'Cached: 9651244 kB' 'SwapCached: 76 kB' 'Active: 7083752 kB' 'Inactive: 3191692 kB' 'Active(anon): 6176424 kB' 'Inactive(anon): 2351048 kB' 'Active(file): 907328 kB' 'Inactive(file): 840644 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 7698940 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 620680 kB' 'Mapped: 156316 kB' 'Shmem: 7910056 kB' 'KReclaimable: 570100 kB' 'Slab: 1570488 kB' 'SReclaimable: 570100 kB' 'SUnreclaim: 1000388 kB' 'KernelStack: 21872 kB' 'PageTables: 8636 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481924 kB' 'Committed_AS: 10455148 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218148 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3210612 kB' 'DirectMap2M: 37369856 kB' 'DirectMap1G: 29360128 kB' 00:04:21.590 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.590 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.590 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.590 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.590 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.590 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.590 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.590 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.590 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.590 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.590 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.590 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.590 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.590 
15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.590 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.590 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.590 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.590 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.590 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.590 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.590 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.590 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.590 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.590 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.590 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.590 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.590 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.590 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.590 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.590 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.590 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.590 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.590 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.590 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.590 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.590 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.590 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.590 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.590 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.590 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.590 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.590 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.590 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.590 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.590 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.590 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.590 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.590 15:03:46 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.590 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.590 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.590 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.590 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.590 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.590 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.590 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.590 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.590 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.590 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.590 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.590 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.590 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.590 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.590 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.590 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.590 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.590 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.590 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.590 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.590 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.590 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.590 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.590 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.590 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.590 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.590 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.590 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.590 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.590 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.590 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.590 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.590 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:21.590 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.590 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.590 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.590 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.590 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.590 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.590 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.590 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.590 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.590 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.590 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.590 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.590 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.590 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.590 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.590 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.590 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.590 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.590 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.590 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.590 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.590 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.590 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.590 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.590 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.590 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.590 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.591 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.591 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.591 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.591 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.591 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.591 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.591 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.591 15:03:46 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.591 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.591 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.591 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.591 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.591 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.591 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.591 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.591 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.591 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.591 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.591 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.591 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.591 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.591 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.591 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.591 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.591 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.591 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.591 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.591 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.591 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.591 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.591 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.591 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.591 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.591 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.591 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.591 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.591 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.591 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.591 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.591 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.591 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.591 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.591 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.591 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.591 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.591 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.591 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.591 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.591 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.591 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.591 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.591 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.591 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.591 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:21.591 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:21.591 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # anon=0 00:04:21.591 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:04:21.591 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:21.591 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:21.591 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:21.591 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:21.591 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:21.591 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:21.591 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:21.591 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:21.591 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:21.591 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.591 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.591 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 43272400 kB' 'MemAvailable: 44893672 kB' 'Buffers: 6784 kB' 'Cached: 9651252 kB' 'SwapCached: 76 kB' 'Active: 7083996 kB' 'Inactive: 3191692 kB' 'Active(anon): 6176668 kB' 'Inactive(anon): 2351048 kB' 'Active(file): 907328 kB' 'Inactive(file): 840644 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 7698940 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 620904 kB' 'Mapped: 156272 kB' 'Shmem: 7910064 kB' 'KReclaimable: 570100 kB' 'Slab: 1570488 kB' 'SReclaimable: 570100 kB' 'SUnreclaim: 1000388 kB' 'KernelStack: 21856 kB' 
'PageTables: 8580 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481924 kB' 'Committed_AS: 10455168 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218132 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3210612 kB' 'DirectMap2M: 37369856 kB' 'DirectMap1G: 29360128 kB' 00:04:21.591 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.591 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.591 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.591 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.591 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.591 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.591 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.591 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.591 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.591 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.591 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.591 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.591 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.591 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.591 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.591 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.591 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.591 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.591 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.591 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.591 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.591 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.591 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.591 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.591 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.591 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.591 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.591 15:03:46 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:21.591 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.591 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.591 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.591 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.591 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.591 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.591 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.591 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.591 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.591 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.592 15:03:46 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.592 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.593 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.593 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.593 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.593 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.593 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.593 15:03:46 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.593 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.593 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.593 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.593 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.593 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.593 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.593 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.593 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.593 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.593 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.593 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.593 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.593 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.593 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.593 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.593 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.593 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.593 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.593 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.593 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.593 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.593 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.593 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.593 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.593 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.593 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.593 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.593 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.593 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.593 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.593 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.593 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.593 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:21.593 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.593 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.593 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.593 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.593 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.593 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.593 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.593 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.593 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.593 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:21.593 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:21.593 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # surp=0 00:04:21.593 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:04:21.593 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:21.593 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:21.593 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:21.593 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:21.593 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:21.593 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:21.593 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:21.593 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:21.593 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:21.593 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.593 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.593 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 43270376 kB' 'MemAvailable: 44891648 kB' 'Buffers: 6784 kB' 'Cached: 9651264 kB' 'SwapCached: 76 kB' 'Active: 7084680 kB' 'Inactive: 3191692 kB' 'Active(anon): 6177352 kB' 'Inactive(anon): 2351048 kB' 'Active(file): 907328 kB' 'Inactive(file): 840644 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 7698940 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 621628 kB' 'Mapped: 156268 kB' 'Shmem: 7910076 kB' 'KReclaimable: 570100 kB' 'Slab: 1570584 kB' 'SReclaimable: 570100 kB' 'SUnreclaim: 1000484 kB' 'KernelStack: 21888 kB' 'PageTables: 8736 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481924 kB' 'Committed_AS: 10469824 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218148 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3210612 kB' 'DirectMap2M: 37369856 kB' 'DirectMap1G: 29360128 kB' 00:04:21.593 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.593 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.593 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.593 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.593 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.593 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.593 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.593 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.593 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.593 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.593 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.593 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.593 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.593 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.593 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.593 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.593 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.593 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.593 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.593 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.593 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.593 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.593 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.593 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.593 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.593 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.593 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.593 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.593 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.593 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.593 15:03:46 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.593 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.593 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.593 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.593 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.593 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.593 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.593 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.593 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.593 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.593 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.593 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.593 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.593 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.594 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.594 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.594 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.594 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.594 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.594 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.594 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.594 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.594 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.594 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.594 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.594 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.594 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.594 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.594 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.594 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.594 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.594 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.594 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.594 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:21.594 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.594 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.594 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.594 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.594 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.594 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.594 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.594 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.594 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.594 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.594 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.594 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.594 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.594 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.594 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.594 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.594 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.594 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.594 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.594 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.594 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.594 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.594 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.594 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.594 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.594 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.594 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.594 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.594 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.594 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.594 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.594 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.594 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.594 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # continue 00:04:21.594 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.594 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.594 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.594 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.594 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.594 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.594 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.594 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.594 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.594 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.594 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.594 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.594 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.594 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.594 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.594 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.594 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.594 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.594 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.594 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.594 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.594 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.594 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.594 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.594 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.594 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.594 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.594 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.594 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.594 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.594 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.594 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.594 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.594 15:03:46 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:21.594 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.594 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.594 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.594 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.594 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.594 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.594 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.594 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.594 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.594 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.594 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.594 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.594 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.594 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.594 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.594 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.595 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.595 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.595 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.595 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.595 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.595 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.595 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.595 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.595 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.595 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.595 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.595 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.595 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.595 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.595 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.595 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.595 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.595 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.595 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.595 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.595 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.595 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.595 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.595 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.595 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.595 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.595 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.595 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.595 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.595 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.595 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.595 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.595 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.595 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.595 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.595 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.595 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.595 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.595 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.595 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.595 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.595 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.595 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.595 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.595 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.595 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.595 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.595 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.595 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.595 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.595 15:03:46 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.595 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.595 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.595 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:21.595 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:21.595 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # resv=0 00:04:21.595 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024 00:04:21.595 nr_hugepages=1024 00:04:21.595 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:04:21.595 resv_hugepages=0 00:04:21.595 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:04:21.595 surplus_hugepages=0 00:04:21.595 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:04:21.595 anon_hugepages=0 00:04:21.595 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:21.595 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages )) 00:04:21.595 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:04:21.595 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:21.595 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:21.595 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:21.595 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:21.595 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:21.595 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:21.595 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:21.595 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:21.595 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:21.595 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 43274084 kB' 'MemAvailable: 44895356 kB' 'Buffers: 6784 kB' 'Cached: 9651288 kB' 'SwapCached: 76 kB' 'Active: 7083640 kB' 'Inactive: 3191692 kB' 'Active(anon): 6176312 kB' 'Inactive(anon): 2351048 kB' 'Active(file): 907328 kB' 'Inactive(file): 840644 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 7698940 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 620560 kB' 'Mapped: 156268 kB' 'Shmem: 7910100 kB' 'KReclaimable: 570100 kB' 'Slab: 1570584 kB' 'SReclaimable: 570100 kB' 'SUnreclaim: 1000484 kB' 'KernelStack: 21840 kB' 'PageTables: 8548 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481924 kB' 'Committed_AS: 10455980 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218052 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3210612 kB' 'DirectMap2M: 37369856 kB' 'DirectMap1G: 29360128 kB' 00:04:21.595 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.595 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.595 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.595 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.595 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.595 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.595 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.595 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.595 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.595 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.595 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.595 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.595 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.595 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.595 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.595 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.595 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.595 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.595 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.595 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.595 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.595 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.595 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.595 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.595 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.595 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.595 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.595 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.595 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.595 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.595 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.595 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.595 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.596 15:03:46 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.596 
15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.596 15:03:46 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.596 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:21.597 
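At this point the global check has just passed ("(( 1024 == nr_hugepages + surp + resv ))" is true), and the trace moves on to get_nodes, which enumerates /sys/devices/system/node/node* (two nodes here, 1024 pages on node0 and 0 on node1), folds reserved and surplus pages into the expected per-node count, and compares it with what node0's meminfo actually reports, ending in "node0=1024 expecting 1024". A rough sketch of that per-node verification follows, assuming the get_meminfo_sketch helper shown earlier; the function name, argument layout, and the sed-based prefix stripping are illustrative stand-ins, since the exact bookkeeping in setup/hugepages.sh is only partially visible in the trace.

verify_node_hugepages() {
    # $1 = node index, $2 = pages the test configured, $3 = reserved pages
    local node=$1 expected=$2 resv=${3:-0} actual surp
    local mem_f=/sys/devices/system/node/node$node/meminfo
    # Per-node meminfo lines carry a "Node N " prefix; SPDK strips it with a
    # "${mem[@]#Node +([0-9]) }" expansion, sed is only a stand-in here.
    actual=$(get_meminfo_sketch HugePages_Total <(sed 's/^Node [0-9]* //' "$mem_f"))
    surp=$(get_meminfo_sketch HugePages_Surp <(sed 's/^Node [0-9]* //' "$mem_f"))
    # Reserved and surplus pages are folded into the expectation, mirroring the
    # '(( nodes_test[node] += resv ))' and '+= 0' steps seen in the trace.
    (( expected += resv + surp ))
    echo "node$node=$actual expecting $expected"
    [[ $actual == "$expected" ]]
}
# On this box, 'verify_node_hugepages 0 1024 0' would print
# "node0=1024 expecting 1024" and succeed, which is the line the trace ends on.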
15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@26 -- # local node 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=0 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@31 -- # no_nodes=2 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32634436 kB' 'MemFree: 23392744 kB' 'MemUsed: 9241692 kB' 'SwapCached: 44 kB' 'Active: 4223200 kB' 'Inactive: 535260 kB' 'Active(anon): 3445640 kB' 'Inactive(anon): 56 kB' 'Active(file): 777560 kB' 'Inactive(file): 535204 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4433928 kB' 'Mapped: 99800 kB' 'AnonPages: 327796 kB' 'Shmem: 3121120 kB' 'KernelStack: 11320 kB' 'PageTables: 5316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 391264 kB' 'Slab: 876520 kB' 'SReclaimable: 391264 kB' 'SUnreclaim: 485256 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.597 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.598 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.598 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.598 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.598 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.598 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.598 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.598 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.598 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.598 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.598 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.598 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.598 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.598 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.598 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.598 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.598 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.598 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.598 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.598 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.598 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.598 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.598 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.598 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.598 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.598 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.598 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.598 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.598 15:03:46 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.598 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.598 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.598 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.598 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.598 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.598 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.598 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.598 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.598 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.598 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.598 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.598 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.598 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.598 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.598 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.598 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.598 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.598 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.598 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.598 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.598 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.598 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.598 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.598 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.598 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.598 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.598 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.598 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.598 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.598 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.598 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.598 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.598 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.598 15:03:46 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.598 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.598 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.598 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.598 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.598 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.598 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.598 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.598 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.598 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.598 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.598 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.598 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.598 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.598 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.598 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.598 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.598 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.598 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.598 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.598 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.598 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.598 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.598 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.598 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.598 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.598 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.598 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.598 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.598 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.598 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.598 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.598 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.598 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.598 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.598 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.598 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.598 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.598 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.598 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.598 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.598 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.598 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:21.598 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:21.598 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:04:21.598 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:04:21.599 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:04:21.599 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:04:21.599 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # echo 'node0=1024 expecting 1024' 00:04:21.599 node0=1024 expecting 1024 00:04:21.599 15:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@129 -- # [[ 1024 == \1\0\2\4 ]] 00:04:21.599 00:04:21.599 real 0m6.920s 00:04:21.599 user 0m2.531s 00:04:21.599 sys 0m4.450s 00:04:21.599 15:03:46 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:21.599 15:03:46 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:21.599 ************************************ 00:04:21.599 END TEST no_shrink_alloc 00:04:21.599 ************************************ 00:04:21.599 15:03:46 setup.sh.hugepages -- setup/hugepages.sh@206 -- # clear_hp 00:04:21.599 15:03:46 setup.sh.hugepages -- setup/hugepages.sh@36 -- # local node hp 00:04:21.599 15:03:46 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}" 00:04:21.599 15:03:46 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:21.599 15:03:46 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:04:21.599 15:03:46 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:21.599 15:03:46 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:04:21.599 15:03:46 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}" 00:04:21.599 15:03:46 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:21.599 15:03:46 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:04:21.599 15:03:46 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:21.599 15:03:46 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:04:21.599 15:03:46 setup.sh.hugepages -- 
setup/hugepages.sh@44 -- # export CLEAR_HUGE=yes 00:04:21.599 15:03:46 setup.sh.hugepages -- setup/hugepages.sh@44 -- # CLEAR_HUGE=yes 00:04:21.599 00:04:21.599 real 0m23.711s 00:04:21.599 user 0m8.285s 00:04:21.599 sys 0m14.225s 00:04:21.599 15:03:46 setup.sh.hugepages -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:21.599 15:03:46 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:21.599 ************************************ 00:04:21.599 END TEST hugepages 00:04:21.599 ************************************ 00:04:21.856 15:03:46 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh 00:04:21.856 15:03:46 setup.sh -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:21.856 15:03:46 setup.sh -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:21.856 15:03:46 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:21.856 ************************************ 00:04:21.856 START TEST driver 00:04:21.856 ************************************ 00:04:21.856 15:03:46 setup.sh.driver -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh 00:04:21.856 * Looking for test storage... 00:04:21.856 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:04:21.857 15:03:47 setup.sh.driver -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:21.857 15:03:47 setup.sh.driver -- common/autotest_common.sh@1693 -- # lcov --version 00:04:21.857 15:03:47 setup.sh.driver -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:21.857 15:03:47 setup.sh.driver -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:21.857 15:03:47 setup.sh.driver -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:21.857 15:03:47 setup.sh.driver -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:21.857 15:03:47 setup.sh.driver -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:21.857 15:03:47 setup.sh.driver -- scripts/common.sh@336 -- # IFS=.-: 00:04:21.857 15:03:47 setup.sh.driver -- scripts/common.sh@336 -- # read -ra ver1 00:04:21.857 15:03:47 setup.sh.driver -- scripts/common.sh@337 -- # IFS=.-: 00:04:21.857 15:03:47 setup.sh.driver -- scripts/common.sh@337 -- # read -ra ver2 00:04:21.857 15:03:47 setup.sh.driver -- scripts/common.sh@338 -- # local 'op=<' 00:04:21.857 15:03:47 setup.sh.driver -- scripts/common.sh@340 -- # ver1_l=2 00:04:21.857 15:03:47 setup.sh.driver -- scripts/common.sh@341 -- # ver2_l=1 00:04:21.857 15:03:47 setup.sh.driver -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:21.857 15:03:47 setup.sh.driver -- scripts/common.sh@344 -- # case "$op" in 00:04:21.857 15:03:47 setup.sh.driver -- scripts/common.sh@345 -- # : 1 00:04:21.857 15:03:47 setup.sh.driver -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:21.857 15:03:47 setup.sh.driver -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:21.857 15:03:47 setup.sh.driver -- scripts/common.sh@365 -- # decimal 1 00:04:21.857 15:03:47 setup.sh.driver -- scripts/common.sh@353 -- # local d=1 00:04:21.857 15:03:47 setup.sh.driver -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:21.857 15:03:47 setup.sh.driver -- scripts/common.sh@355 -- # echo 1 00:04:21.857 15:03:47 setup.sh.driver -- scripts/common.sh@365 -- # ver1[v]=1 00:04:21.857 15:03:47 setup.sh.driver -- scripts/common.sh@366 -- # decimal 2 00:04:21.857 15:03:47 setup.sh.driver -- scripts/common.sh@353 -- # local d=2 00:04:21.857 15:03:47 setup.sh.driver -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:21.857 15:03:47 setup.sh.driver -- scripts/common.sh@355 -- # echo 2 00:04:21.857 15:03:47 setup.sh.driver -- scripts/common.sh@366 -- # ver2[v]=2 00:04:21.857 15:03:47 setup.sh.driver -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:21.857 15:03:47 setup.sh.driver -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:21.857 15:03:47 setup.sh.driver -- scripts/common.sh@368 -- # return 0 00:04:21.857 15:03:47 setup.sh.driver -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:21.857 15:03:47 setup.sh.driver -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:21.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.857 --rc genhtml_branch_coverage=1 00:04:21.857 --rc genhtml_function_coverage=1 00:04:21.857 --rc genhtml_legend=1 00:04:21.857 --rc geninfo_all_blocks=1 00:04:21.857 --rc geninfo_unexecuted_blocks=1 00:04:21.857 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:21.857 ' 00:04:21.857 15:03:47 setup.sh.driver -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:21.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.857 --rc genhtml_branch_coverage=1 00:04:21.857 --rc genhtml_function_coverage=1 00:04:21.857 --rc genhtml_legend=1 00:04:21.857 --rc geninfo_all_blocks=1 00:04:21.857 --rc geninfo_unexecuted_blocks=1 00:04:21.857 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:21.857 ' 00:04:21.857 15:03:47 setup.sh.driver -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:21.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.857 --rc genhtml_branch_coverage=1 00:04:21.857 --rc genhtml_function_coverage=1 00:04:21.857 --rc genhtml_legend=1 00:04:21.857 --rc geninfo_all_blocks=1 00:04:21.857 --rc geninfo_unexecuted_blocks=1 00:04:21.857 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:21.857 ' 00:04:21.857 15:03:47 setup.sh.driver -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:21.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.857 --rc genhtml_branch_coverage=1 00:04:21.857 --rc genhtml_function_coverage=1 00:04:21.857 --rc genhtml_legend=1 00:04:21.857 --rc geninfo_all_blocks=1 00:04:21.857 --rc geninfo_unexecuted_blocks=1 00:04:21.857 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:21.857 ' 00:04:21.857 15:03:47 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:21.857 15:03:47 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:21.857 15:03:47 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:04:26.145 15:03:51 setup.sh.driver -- 
setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:26.145 15:03:51 setup.sh.driver -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:26.145 15:03:51 setup.sh.driver -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:26.145 15:03:51 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:26.145 ************************************ 00:04:26.145 START TEST guess_driver 00:04:26.145 ************************************ 00:04:26.145 15:03:51 setup.sh.driver.guess_driver -- common/autotest_common.sh@1129 -- # guess_driver 00:04:26.145 15:03:51 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:26.145 15:03:51 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:26.145 15:03:51 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:26.145 15:03:51 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:26.437 15:03:51 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:26.437 15:03:51 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:26.437 15:03:51 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:26.437 15:03:51 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:26.437 15:03:51 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:26.437 15:03:51 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 176 > 0 )) 00:04:26.437 15:03:51 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:26.437 15:03:51 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:26.437 15:03:51 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:26.437 15:03:51 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:26.437 15:03:51 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:26.437 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:26.437 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:26.437 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:26.437 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:26.437 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:26.437 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:26.437 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:26.437 15:03:51 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:26.437 15:03:51 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:26.437 15:03:51 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:26.437 15:03:51 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:26.437 15:03:51 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:26.437 Looking for driver=vfio-pci 00:04:26.437 15:03:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:26.437 15:03:51 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- 
# setup output config 00:04:26.437 15:03:51 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:26.437 15:03:51 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:29.720 15:03:54 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:29.720 15:03:54 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:29.720 15:03:54 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:29.720 15:03:54 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:29.720 15:03:54 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:29.720 15:03:54 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:29.720 15:03:54 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:29.720 15:03:54 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:29.720 15:03:54 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:29.720 15:03:54 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:29.720 15:03:54 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:29.720 15:03:54 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:29.720 15:03:54 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:29.720 15:03:54 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:29.720 15:03:54 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:29.720 15:03:54 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:29.720 15:03:54 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:29.720 15:03:54 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:29.720 15:03:54 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:29.720 15:03:54 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:29.720 15:03:54 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:29.720 15:03:54 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:29.720 15:03:54 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:29.720 15:03:54 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:29.720 15:03:54 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:29.720 15:03:54 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:29.720 15:03:54 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:29.720 15:03:54 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:29.720 15:03:54 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:29.720 15:03:54 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:29.720 15:03:54 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:29.721 15:03:54 
setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:29.721 15:03:54 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:29.721 15:03:54 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:29.721 15:03:54 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:29.721 15:03:54 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:29.721 15:03:54 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:29.721 15:03:54 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:29.721 15:03:54 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:29.721 15:03:54 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:29.721 15:03:54 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:29.721 15:03:54 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:29.721 15:03:54 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:29.721 15:03:54 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:29.721 15:03:54 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:29.721 15:03:54 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:29.721 15:03:54 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:29.721 15:03:54 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:31.623 15:03:56 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:31.623 15:03:56 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:31.623 15:03:56 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:31.623 15:03:56 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:31.623 15:03:56 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:31.623 15:03:56 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:31.623 15:03:56 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:04:36.886 00:04:36.886 real 0m9.719s 00:04:36.886 user 0m2.564s 00:04:36.886 sys 0m4.900s 00:04:36.886 15:04:01 setup.sh.driver.guess_driver -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:36.886 15:04:01 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:36.886 ************************************ 00:04:36.886 END TEST guess_driver 00:04:36.886 ************************************ 00:04:36.886 00:04:36.886 real 0m14.247s 00:04:36.886 user 0m3.794s 00:04:36.886 sys 0m7.353s 00:04:36.886 15:04:01 setup.sh.driver -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:36.886 15:04:01 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:36.886 ************************************ 00:04:36.886 END TEST driver 00:04:36.886 ************************************ 00:04:36.886 15:04:01 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh 00:04:36.886 15:04:01 setup.sh -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:36.886 15:04:01 setup.sh -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:36.886 15:04:01 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:36.886 ************************************ 00:04:36.886 START TEST devices 00:04:36.886 ************************************ 00:04:36.886 15:04:01 setup.sh.devices -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh 00:04:36.886 * Looking for test storage... 00:04:36.886 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:04:36.886 15:04:01 setup.sh.devices -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:36.886 15:04:01 setup.sh.devices -- common/autotest_common.sh@1693 -- # lcov --version 00:04:36.886 15:04:01 setup.sh.devices -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:36.886 15:04:01 setup.sh.devices -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:36.886 15:04:01 setup.sh.devices -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:36.886 15:04:01 setup.sh.devices -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:36.886 15:04:01 setup.sh.devices -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:36.886 15:04:01 setup.sh.devices -- scripts/common.sh@336 -- # IFS=.-: 00:04:36.886 15:04:01 setup.sh.devices -- scripts/common.sh@336 -- # read -ra ver1 00:04:36.886 15:04:01 setup.sh.devices -- scripts/common.sh@337 -- # IFS=.-: 00:04:36.886 15:04:01 setup.sh.devices -- scripts/common.sh@337 -- # read -ra ver2 00:04:36.886 15:04:01 setup.sh.devices -- scripts/common.sh@338 -- # local 'op=<' 00:04:36.886 15:04:01 setup.sh.devices -- scripts/common.sh@340 -- # ver1_l=2 00:04:36.886 15:04:01 setup.sh.devices -- scripts/common.sh@341 -- # ver2_l=1 00:04:36.886 15:04:01 setup.sh.devices -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:36.886 15:04:01 setup.sh.devices -- scripts/common.sh@344 -- # case "$op" in 00:04:36.886 15:04:01 setup.sh.devices -- scripts/common.sh@345 -- # : 1 00:04:36.886 15:04:01 setup.sh.devices -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:36.886 15:04:01 setup.sh.devices -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:36.886 15:04:01 setup.sh.devices -- scripts/common.sh@365 -- # decimal 1 00:04:36.886 15:04:01 setup.sh.devices -- scripts/common.sh@353 -- # local d=1 00:04:36.886 15:04:01 setup.sh.devices -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:36.886 15:04:01 setup.sh.devices -- scripts/common.sh@355 -- # echo 1 00:04:36.886 15:04:01 setup.sh.devices -- scripts/common.sh@365 -- # ver1[v]=1 00:04:36.886 15:04:01 setup.sh.devices -- scripts/common.sh@366 -- # decimal 2 00:04:36.886 15:04:01 setup.sh.devices -- scripts/common.sh@353 -- # local d=2 00:04:36.886 15:04:01 setup.sh.devices -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:36.886 15:04:01 setup.sh.devices -- scripts/common.sh@355 -- # echo 2 00:04:36.886 15:04:01 setup.sh.devices -- scripts/common.sh@366 -- # ver2[v]=2 00:04:36.887 15:04:01 setup.sh.devices -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:36.887 15:04:01 setup.sh.devices -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:36.887 15:04:01 setup.sh.devices -- scripts/common.sh@368 -- # return 0 00:04:36.887 15:04:01 setup.sh.devices -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:36.887 15:04:01 setup.sh.devices -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:36.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.887 --rc genhtml_branch_coverage=1 00:04:36.887 --rc genhtml_function_coverage=1 00:04:36.887 --rc genhtml_legend=1 00:04:36.887 --rc geninfo_all_blocks=1 00:04:36.887 --rc geninfo_unexecuted_blocks=1 00:04:36.887 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:36.887 ' 00:04:36.887 15:04:01 setup.sh.devices -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:36.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.887 --rc genhtml_branch_coverage=1 00:04:36.887 --rc genhtml_function_coverage=1 00:04:36.887 --rc genhtml_legend=1 00:04:36.887 --rc geninfo_all_blocks=1 00:04:36.887 --rc geninfo_unexecuted_blocks=1 00:04:36.887 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:36.887 ' 00:04:36.887 15:04:01 setup.sh.devices -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:36.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.887 --rc genhtml_branch_coverage=1 00:04:36.887 --rc genhtml_function_coverage=1 00:04:36.887 --rc genhtml_legend=1 00:04:36.887 --rc geninfo_all_blocks=1 00:04:36.887 --rc geninfo_unexecuted_blocks=1 00:04:36.887 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:36.887 ' 00:04:36.887 15:04:01 setup.sh.devices -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:36.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.887 --rc genhtml_branch_coverage=1 00:04:36.887 --rc genhtml_function_coverage=1 00:04:36.887 --rc genhtml_legend=1 00:04:36.887 --rc geninfo_all_blocks=1 00:04:36.887 --rc geninfo_unexecuted_blocks=1 00:04:36.887 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:36.887 ' 00:04:36.887 15:04:01 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:36.887 15:04:01 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:36.887 15:04:01 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:36.887 15:04:01 setup.sh.devices -- setup/common.sh@12 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:04:40.173 15:04:05 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:40.173 15:04:05 setup.sh.devices -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:40.173 15:04:05 setup.sh.devices -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:40.173 15:04:05 setup.sh.devices -- common/autotest_common.sh@1658 -- # local nvme bdf 00:04:40.173 15:04:05 setup.sh.devices -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:40.173 15:04:05 setup.sh.devices -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:04:40.173 15:04:05 setup.sh.devices -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:40.173 15:04:05 setup.sh.devices -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:40.173 15:04:05 setup.sh.devices -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:40.173 15:04:05 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:40.173 15:04:05 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:40.173 15:04:05 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:40.173 15:04:05 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:40.173 15:04:05 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:40.173 15:04:05 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:40.173 15:04:05 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:40.173 15:04:05 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:40.173 15:04:05 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:d8:00.0 00:04:40.173 15:04:05 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:04:40.173 15:04:05 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:40.173 15:04:05 setup.sh.devices -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:04:40.173 15:04:05 setup.sh.devices -- scripts/common.sh@390 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:40.173 No valid GPT data, bailing 00:04:40.173 15:04:05 setup.sh.devices -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:40.173 15:04:05 setup.sh.devices -- scripts/common.sh@394 -- # pt= 00:04:40.173 15:04:05 setup.sh.devices -- scripts/common.sh@395 -- # return 1 00:04:40.173 15:04:05 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:40.173 15:04:05 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:40.173 15:04:05 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:40.173 15:04:05 setup.sh.devices -- setup/common.sh@80 -- # echo 1600321314816 00:04:40.173 15:04:05 setup.sh.devices -- setup/devices.sh@204 -- # (( 1600321314816 >= min_disk_size )) 00:04:40.173 15:04:05 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:40.173 15:04:05 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:d8:00.0 00:04:40.173 15:04:05 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:40.173 15:04:05 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:40.173 15:04:05 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:40.173 15:04:05 setup.sh.devices -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:40.173 15:04:05 
setup.sh.devices -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:40.173 15:04:05 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:40.173 ************************************ 00:04:40.173 START TEST nvme_mount 00:04:40.173 ************************************ 00:04:40.173 15:04:05 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1129 -- # nvme_mount 00:04:40.173 15:04:05 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:40.173 15:04:05 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:40.173 15:04:05 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:40.173 15:04:05 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:40.173 15:04:05 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:40.173 15:04:05 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:40.173 15:04:05 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:40.173 15:04:05 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:40.173 15:04:05 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:40.173 15:04:05 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:40.173 15:04:05 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:40.173 15:04:05 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:40.173 15:04:05 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:40.173 15:04:05 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:40.173 15:04:05 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:40.173 15:04:05 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:40.173 15:04:05 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:40.173 15:04:05 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:40.173 15:04:05 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:41.109 Creating new GPT entries in memory. 00:04:41.110 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:41.110 other utilities. 00:04:41.110 15:04:06 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:41.110 15:04:06 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:41.110 15:04:06 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:41.110 15:04:06 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:41.110 15:04:06 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:42.044 Creating new GPT entries in memory. 00:04:42.044 The operation has completed successfully. 
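Condensed, the driver detection and disk preparation traced above amount to only a few shell commands. The following is a minimal sketch, not the SPDK setup scripts themselves; the disk name, partition bounds and mount point are taken from this run (the $SPDK_DIR variable is assumed) and would differ on other machines:

    # Prefer vfio-pci when IOMMU groups exist and the module resolves, as driver.sh does above.
    groups=(/sys/kernel/iommu_groups/*)
    if (( ${#groups[@]} > 0 )) && modprobe --show-depends vfio_pci | grep -q '\.ko'; then
        driver=vfio-pci
    else
        driver="No valid driver found"
    fi
    echo "Looking for driver=$driver"

    # Repartition the test disk and mount a fresh ext4 filesystem for the nvme_mount test.
    disk=/dev/nvme0n1                                  # test disk picked in this run
    mnt=$SPDK_DIR/test/setup/nvme_mount                # mount point used above ($SPDK_DIR assumed)
    sgdisk "$disk" --zap-all
    flock "$disk" sgdisk "$disk" --new=1:2048:2099199  # 1 GiB partition (2097152 x 512-byte sectors)
    mkfs.ext4 -qF "${disk}p1"
    mkdir -p "$mnt" && mount "${disk}p1" "$mnt"

The 1073741824-byte size and the sector range 2048-2099199 in the sketch come directly from the size/=512 arithmetic and the sgdisk call visible in the trace.
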
00:04:42.044 15:04:07 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:42.044 15:04:07 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:42.044 15:04:07 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 2334947 00:04:42.044 15:04:07 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:42.044 15:04:07 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:42.044 15:04:07 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:42.044 15:04:07 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:42.044 15:04:07 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:42.044 15:04:07 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:42.044 15:04:07 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:42.044 15:04:07 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:42.044 15:04:07 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:42.044 15:04:07 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:42.044 15:04:07 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:42.044 15:04:07 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:42.044 15:04:07 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:42.044 15:04:07 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:42.044 15:04:07 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:42.044 15:04:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.044 15:04:07 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:42.044 15:04:07 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:42.044 15:04:07 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:42.044 15:04:07 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:45.326 15:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:45.326 15:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.326 15:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:45.326 15:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.326 15:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == 
\0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:45.326 15:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.326 15:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:45.326 15:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.326 15:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:45.326 15:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.326 15:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:45.326 15:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.326 15:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:45.326 15:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.326 15:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:45.326 15:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.326 15:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:45.326 15:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.327 15:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:45.327 15:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.327 15:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:45.327 15:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.327 15:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:45.327 15:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.327 15:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:45.327 15:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.327 15:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:45.327 15:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.327 15:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:45.327 15:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.327 15:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:45.327 15:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.327 15:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:45.327 15:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:45.327 15:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:45.327 15:04:10 setup.sh.devices.nvme_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.327 15:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:45.327 15:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:45.327 15:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:45.585 15:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:45.585 15:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:45.585 15:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:45.585 15:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:45.585 15:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:45.585 15:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:45.585 15:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:45.585 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:45.585 15:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:45.585 15:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:45.844 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:45.844 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 00:04:45.844 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:45.844 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:45.844 15:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:45.844 15:04:10 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:45.844 15:04:10 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:45.844 15:04:10 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:45.844 15:04:10 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:45.844 15:04:11 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:45.844 15:04:11 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:45.844 15:04:11 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:45.844 15:04:11 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:45.844 15:04:11 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local 
mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:45.844 15:04:11 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:45.844 15:04:11 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:45.844 15:04:11 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:45.844 15:04:11 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:45.844 15:04:11 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:45.844 15:04:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.844 15:04:11 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:45.844 15:04:11 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:45.844 15:04:11 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:45.844 15:04:11 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:49.122 15:04:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:49.122 15:04:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.122 15:04:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:49.122 15:04:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.122 15:04:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:49.122 15:04:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.122 15:04:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:49.122 15:04:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.122 15:04:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:49.122 15:04:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.122 15:04:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:49.122 15:04:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.122 15:04:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:49.122 15:04:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.122 15:04:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:49.122 15:04:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.122 15:04:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:49.122 15:04:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.122 15:04:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:49.122 15:04:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.122 15:04:13 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:49.122 15:04:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.122 15:04:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:49.122 15:04:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.122 15:04:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:49.122 15:04:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.122 15:04:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:49.122 15:04:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.122 15:04:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:49.122 15:04:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.122 15:04:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:49.122 15:04:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.122 15:04:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:49.122 15:04:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:49.122 15:04:13 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:49.122 15:04:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.122 15:04:14 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:49.122 15:04:14 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:49.122 15:04:14 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:49.122 15:04:14 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:49.122 15:04:14 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:49.122 15:04:14 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:49.122 15:04:14 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:d8:00.0 data@nvme0n1 '' '' 00:04:49.122 15:04:14 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:49.122 15:04:14 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:49.122 15:04:14 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:49.122 15:04:14 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:49.122 15:04:14 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:49.122 15:04:14 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:49.122 15:04:14 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # 
local pci status 00:04:49.122 15:04:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.122 15:04:14 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:49.122 15:04:14 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:49.122 15:04:14 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:49.122 15:04:14 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:52.400 15:04:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:52.400 15:04:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.400 15:04:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:52.400 15:04:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.400 15:04:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:52.400 15:04:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.400 15:04:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:52.400 15:04:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.400 15:04:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:52.400 15:04:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.400 15:04:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:52.400 15:04:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.400 15:04:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:52.400 15:04:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.400 15:04:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:52.401 15:04:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.401 15:04:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:52.401 15:04:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.401 15:04:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:52.401 15:04:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.401 15:04:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:52.401 15:04:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.401 15:04:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:52.401 15:04:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.401 15:04:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:52.401 15:04:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.401 15:04:17 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:52.401 15:04:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.401 15:04:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:52.401 15:04:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.401 15:04:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:52.401 15:04:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.401 15:04:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:52.401 15:04:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:52.401 15:04:17 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:52.401 15:04:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.401 15:04:17 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:52.401 15:04:17 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:52.401 15:04:17 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:52.401 15:04:17 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:52.401 15:04:17 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:52.401 15:04:17 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:52.401 15:04:17 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:52.401 15:04:17 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:52.401 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:52.401 00:04:52.401 real 0m12.366s 00:04:52.401 user 0m3.372s 00:04:52.401 sys 0m6.849s 00:04:52.401 15:04:17 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:52.401 15:04:17 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:52.401 ************************************ 00:04:52.401 END TEST nvme_mount 00:04:52.401 ************************************ 00:04:52.401 15:04:17 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:52.401 15:04:17 setup.sh.devices -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:52.401 15:04:17 setup.sh.devices -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:52.401 15:04:17 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:52.401 ************************************ 00:04:52.401 START TEST dm_mount 00:04:52.401 ************************************ 00:04:52.401 15:04:17 setup.sh.devices.dm_mount -- common/autotest_common.sh@1129 -- # dm_mount 00:04:52.401 15:04:17 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:52.401 15:04:17 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:52.401 15:04:17 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:52.401 15:04:17 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:52.401 15:04:17 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # 
local disk=nvme0n1 00:04:52.401 15:04:17 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:52.401 15:04:17 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:52.401 15:04:17 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:52.401 15:04:17 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:52.401 15:04:17 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:52.401 15:04:17 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:52.401 15:04:17 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:52.401 15:04:17 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:52.401 15:04:17 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:52.401 15:04:17 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:52.401 15:04:17 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:52.401 15:04:17 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:52.401 15:04:17 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:52.401 15:04:17 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:52.401 15:04:17 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:52.401 15:04:17 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:53.338 Creating new GPT entries in memory. 00:04:53.338 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:53.338 other utilities. 00:04:53.338 15:04:18 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:53.338 15:04:18 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:53.338 15:04:18 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:53.338 15:04:18 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:53.338 15:04:18 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:54.712 Creating new GPT entries in memory. 00:04:54.712 The operation has completed successfully. 00:04:54.712 15:04:19 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:54.712 15:04:19 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:54.713 15:04:19 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:54.713 15:04:19 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:54.713 15:04:19 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:55.647 The operation has completed successfully. 
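For the dm_mount test the same disk is split into two 1 GiB partitions (sectors 2048-2099199 and 2099200-4196351) and combined into a single device-mapper target named nvme_dm_test. The trace shows only the dmsetup create call, so the linear table below is an assumption, though it is consistent with both nvme0n1p1 and nvme0n1p2 later appearing as holders of dm-0; the remaining commands mirror ones visible in the log, with $SPDK_DIR again assumed:

    # Build one linear dm device over the two 1 GiB partitions created above.
    dmsetup create nvme_dm_test <<'EOF'
    0 2097152 linear /dev/nvme0n1p1 0
    2097152 2097152 linear /dev/nvme0n1p2 0
    EOF
    dm=$(readlink -f /dev/mapper/nvme_dm_test)        # resolves to /dev/dm-0 in this run
    mkfs.ext4 -qF /dev/mapper/nvme_dm_test
    mount /dev/mapper/nvme_dm_test "$SPDK_DIR/test/setup/dm_mount"

    # Teardown, matching the cleanup_dm/cleanup_nvme steps later in the log.
    umount "$SPDK_DIR/test/setup/dm_mount"
    dmsetup remove --force nvme_dm_test
    wipefs --all /dev/nvme0n1p1 /dev/nvme0n1p2
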
00:04:55.647 15:04:20 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:55.647 15:04:20 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:55.647 15:04:20 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 2339382 00:04:55.647 15:04:20 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:55.647 15:04:20 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:55.647 15:04:20 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:55.647 15:04:20 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:55.647 15:04:20 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:55.647 15:04:20 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:55.647 15:04:20 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:55.647 15:04:20 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:55.647 15:04:20 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:55.647 15:04:20 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:55.647 15:04:20 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:55.647 15:04:20 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:55.647 15:04:20 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:55.647 15:04:20 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:55.647 15:04:20 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount size= 00:04:55.647 15:04:20 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:55.647 15:04:20 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:55.647 15:04:20 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:55.647 15:04:20 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:55.647 15:04:20 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:d8:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:55.647 15:04:20 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:55.647 15:04:20 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:55.647 15:04:20 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:55.647 15:04:20 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:55.647 15:04:20 setup.sh.devices.dm_mount -- 
setup/devices.sh@53 -- # local found=0 00:04:55.647 15:04:20 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:55.647 15:04:20 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:55.647 15:04:20 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:55.647 15:04:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.647 15:04:20 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:55.647 15:04:20 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:55.647 15:04:20 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:55.647 15:04:20 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:58.926 15:04:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:58.926 15:04:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.926 15:04:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:58.926 15:04:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.926 15:04:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:58.926 15:04:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.926 15:04:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:58.926 15:04:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.926 15:04:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:58.926 15:04:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.926 15:04:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:58.926 15:04:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.926 15:04:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:58.926 15:04:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.926 15:04:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:58.926 15:04:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.926 15:04:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:58.926 15:04:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.926 15:04:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:58.926 15:04:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.926 15:04:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:58.926 15:04:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.926 15:04:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:58.926 15:04:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 
-- # read -r pci _ _ status 00:04:58.926 15:04:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:58.926 15:04:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.926 15:04:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:58.926 15:04:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.926 15:04:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:58.926 15:04:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.926 15:04:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:58.926 15:04:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.926 15:04:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:58.926 15:04:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:58.926 15:04:23 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:58.926 15:04:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.926 15:04:24 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:58.926 15:04:24 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:58.926 15:04:24 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:58.926 15:04:24 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:58.926 15:04:24 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:58.926 15:04:24 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:58.926 15:04:24 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:d8:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:58.926 15:04:24 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:58.926 15:04:24 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:58.926 15:04:24 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:58.926 15:04:24 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:58.926 15:04:24 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:58.926 15:04:24 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:58.926 15:04:24 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:58.926 15:04:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.926 15:04:24 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:58.926 15:04:24 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:58.926 15:04:24 
setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:58.926 15:04:24 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:05:01.453 15:04:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:01.453 15:04:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.453 15:04:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:01.453 15:04:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.453 15:04:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:01.453 15:04:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.453 15:04:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:01.453 15:04:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.453 15:04:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:01.453 15:04:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.453 15:04:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:01.453 15:04:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.453 15:04:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:01.453 15:04:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.453 15:04:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:01.453 15:04:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.453 15:04:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:01.453 15:04:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.453 15:04:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:01.453 15:04:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.453 15:04:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:01.453 15:04:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.453 15:04:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:01.453 15:04:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.453 15:04:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:01.454 15:04:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.454 15:04:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:01.454 15:04:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.454 15:04:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:01.454 15:04:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ 
status 00:05:01.454 15:04:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:01.454 15:04:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.716 15:04:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:01.716 15:04:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:01.716 15:04:26 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:01.716 15:04:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.716 15:04:27 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:01.716 15:04:27 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:01.716 15:04:27 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:01.716 15:04:27 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:01.716 15:04:27 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:01.716 15:04:27 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:01.716 15:04:27 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:01.974 15:04:27 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:01.974 15:04:27 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:01.974 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:01.974 15:04:27 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:01.974 15:04:27 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:01.974 00:05:01.974 real 0m9.494s 00:05:01.974 user 0m2.225s 00:05:01.974 sys 0m4.322s 00:05:01.974 15:04:27 setup.sh.devices.dm_mount -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:01.974 15:04:27 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:01.974 ************************************ 00:05:01.974 END TEST dm_mount 00:05:01.974 ************************************ 00:05:01.974 15:04:27 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:01.975 15:04:27 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:01.975 15:04:27 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:01.975 15:04:27 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:01.975 15:04:27 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:01.975 15:04:27 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:01.975 15:04:27 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:02.233 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:02.233 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 00:05:02.233 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:02.233 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:02.233 15:04:27 setup.sh.devices -- 
setup/devices.sh@12 -- # cleanup_dm 00:05:02.233 15:04:27 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:02.233 15:04:27 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:02.233 15:04:27 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:02.233 15:04:27 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:02.233 15:04:27 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:02.233 15:04:27 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:02.233 00:05:02.233 real 0m26.122s 00:05:02.233 user 0m7.054s 00:05:02.233 sys 0m13.871s 00:05:02.233 15:04:27 setup.sh.devices -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:02.233 15:04:27 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:02.233 ************************************ 00:05:02.233 END TEST devices 00:05:02.233 ************************************ 00:05:02.233 00:05:02.233 real 1m28.644s 00:05:02.233 user 0m26.922s 00:05:02.233 sys 0m50.446s 00:05:02.233 15:04:27 setup.sh -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:02.233 15:04:27 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:02.233 ************************************ 00:05:02.233 END TEST setup.sh 00:05:02.233 ************************************ 00:05:02.233 15:04:27 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:05:05.520 Hugepages 00:05:05.520 node hugesize free / total 00:05:05.520 node0 1048576kB 0 / 0 00:05:05.520 node0 2048kB 1024 / 1024 00:05:05.520 node1 1048576kB 0 / 0 00:05:05.520 node1 2048kB 1024 / 1024 00:05:05.520 00:05:05.520 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:05.520 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:05:05.520 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:05:05.520 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:05:05.520 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:05:05.520 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:05:05.520 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:05:05.520 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:05:05.520 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:05:05.520 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:05:05.520 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:05:05.520 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:05:05.520 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:05:05.520 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:05:05.520 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:05:05.520 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:05:05.520 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:05:05.778 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:05:05.778 15:04:30 -- spdk/autotest.sh@117 -- # uname -s 00:05:05.778 15:04:30 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:05.778 15:04:30 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:05.778 15:04:30 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:05:09.066 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:09.066 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:09.066 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:09.066 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:09.066 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:09.066 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:09.066 0000:00:04.1 (8086 2021): ioatdma 
-> vfio-pci 00:05:09.066 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:09.066 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:09.066 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:09.066 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:09.066 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:09.067 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:09.067 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:09.067 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:09.067 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:10.969 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:05:10.969 15:04:35 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:11.904 15:04:36 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:11.904 15:04:36 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:11.904 15:04:36 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:05:11.904 15:04:36 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:05:11.904 15:04:36 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:11.904 15:04:36 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:11.904 15:04:36 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:11.904 15:04:36 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:11.904 15:04:36 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:11.904 15:04:37 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:11.904 15:04:37 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:d8:00.0 00:05:11.904 15:04:37 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:05:15.191 Waiting for block devices as requested 00:05:15.191 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:15.191 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:15.191 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:15.191 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:15.191 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:15.191 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:15.191 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:15.191 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:15.191 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:15.451 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:15.451 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:15.451 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:15.708 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:15.708 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:15.708 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:15.966 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:15.966 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:05:16.223 15:04:41 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:16.223 15:04:41 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:d8:00.0 00:05:16.223 15:04:41 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:05:16.223 15:04:41 -- common/autotest_common.sh@1487 -- # grep 0000:d8:00.0/nvme/nvme 00:05:16.223 15:04:41 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:05:16.223 15:04:41 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 ]] 00:05:16.223 15:04:41 -- common/autotest_common.sh@1492 -- # basename 
/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:05:16.223 15:04:41 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:16.223 15:04:41 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:05:16.223 15:04:41 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:05:16.223 15:04:41 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:05:16.223 15:04:41 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:16.223 15:04:41 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:16.223 15:04:41 -- common/autotest_common.sh@1531 -- # oacs=' 0xe' 00:05:16.223 15:04:41 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:16.223 15:04:41 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:16.223 15:04:41 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:16.223 15:04:41 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:16.223 15:04:41 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:16.223 15:04:41 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:16.223 15:04:41 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:16.223 15:04:41 -- common/autotest_common.sh@1543 -- # continue 00:05:16.223 15:04:41 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:16.223 15:04:41 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:16.223 15:04:41 -- common/autotest_common.sh@10 -- # set +x 00:05:16.223 15:04:41 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:16.223 15:04:41 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:16.223 15:04:41 -- common/autotest_common.sh@10 -- # set +x 00:05:16.223 15:04:41 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:05:19.504 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:19.504 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:19.504 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:19.504 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:19.504 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:19.504 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:19.504 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:19.504 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:19.504 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:19.504 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:19.504 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:19.504 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:19.504 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:19.504 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:19.504 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:19.762 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:21.141 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:05:21.141 15:04:46 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:21.141 15:04:46 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:21.141 15:04:46 -- common/autotest_common.sh@10 -- # set +x 00:05:21.400 15:04:46 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:21.400 15:04:46 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:05:21.400 15:04:46 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:05:21.400 15:04:46 -- common/autotest_common.sh@1563 -- # bdfs=() 00:05:21.400 15:04:46 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:05:21.400 15:04:46 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:05:21.400 15:04:46 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:05:21.400 15:04:46 -- 
common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:21.400 15:04:46 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:21.400 15:04:46 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:21.400 15:04:46 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:21.400 15:04:46 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:21.400 15:04:46 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:21.400 15:04:46 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:21.400 15:04:46 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:d8:00.0 00:05:21.400 15:04:46 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:21.400 15:04:46 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:d8:00.0/device 00:05:21.400 15:04:46 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:05:21.400 15:04:46 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:21.400 15:04:46 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:05:21.400 15:04:46 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:05:21.400 15:04:46 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:d8:00.0 00:05:21.400 15:04:46 -- common/autotest_common.sh@1579 -- # [[ -z 0000:d8:00.0 ]] 00:05:21.400 15:04:46 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=2348926 00:05:21.400 15:04:46 -- common/autotest_common.sh@1585 -- # waitforlisten 2348926 00:05:21.400 15:04:46 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:21.400 15:04:46 -- common/autotest_common.sh@835 -- # '[' -z 2348926 ']' 00:05:21.400 15:04:46 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:21.400 15:04:46 -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:21.400 15:04:46 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:21.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:21.400 15:04:46 -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:21.400 15:04:46 -- common/autotest_common.sh@10 -- # set +x 00:05:21.400 [2024-11-27 15:04:46.649375] Starting SPDK v25.01-pre git sha1 2e10c84c8 / DPDK 24.03.0 initialization... 
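The trace above is opal_revert_cleanup collecting NVMe BDFs before it launches spdk_tgt: gen_nvme.sh emits an attach config, jq pulls each PCI address, and the device ID under sysfs is compared against 0x0a54. A minimal bash sketch of that discovery step, using the same paths as this workspace (the opal_bdfs array name is illustrative, not from the test):

    # enumerate NVMe BDFs the same way get_nvme_bdfs does
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    opal_bdfs=()
    for bdf in "${bdfs[@]}"; do
        # keep only controllers whose PCI device ID matches 0x0a54
        [[ $(cat "/sys/bus/pci/devices/$bdf/device") == 0x0a54 ]] && opal_bdfs+=("$bdf")
    done
    printf '%s\n' "${opal_bdfs[@]}"   # here: 0000:d8:00.0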
00:05:21.400 [2024-11-27 15:04:46.649437] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2348926 ] 00:05:21.400 [2024-11-27 15:04:46.719545] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.658 [2024-11-27 15:04:46.760736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.658 15:04:46 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:21.658 15:04:46 -- common/autotest_common.sh@868 -- # return 0 00:05:21.658 15:04:46 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:05:21.658 15:04:46 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:05:21.658 15:04:46 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0 00:05:24.981 nvme0n1 00:05:24.981 15:04:49 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:24.981 [2024-11-27 15:04:50.166868] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:05:24.981 request: 00:05:24.981 { 00:05:24.981 "nvme_ctrlr_name": "nvme0", 00:05:24.981 "password": "test", 00:05:24.981 "method": "bdev_nvme_opal_revert", 00:05:24.981 "req_id": 1 00:05:24.981 } 00:05:24.981 Got JSON-RPC error response 00:05:24.981 response: 00:05:24.981 { 00:05:24.981 "code": -32602, 00:05:24.981 "message": "Invalid parameters" 00:05:24.981 } 00:05:24.981 15:04:50 -- common/autotest_common.sh@1591 -- # true 00:05:24.981 15:04:50 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:05:24.981 15:04:50 -- common/autotest_common.sh@1595 -- # killprocess 2348926 00:05:24.981 15:04:50 -- common/autotest_common.sh@954 -- # '[' -z 2348926 ']' 00:05:24.981 15:04:50 -- common/autotest_common.sh@958 -- # kill -0 2348926 00:05:24.981 15:04:50 -- common/autotest_common.sh@959 -- # uname 00:05:24.981 15:04:50 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:24.981 15:04:50 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2348926 00:05:24.981 15:04:50 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:24.981 15:04:50 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:24.981 15:04:50 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2348926' 00:05:24.981 killing process with pid 2348926 00:05:24.981 15:04:50 -- common/autotest_common.sh@973 -- # kill 2348926 00:05:24.981 15:04:50 -- common/autotest_common.sh@978 -- # wait 2348926 00:05:27.515 15:04:52 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:27.515 15:04:52 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:27.515 15:04:52 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:27.515 15:04:52 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:27.515 15:04:52 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:27.515 15:04:52 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:27.515 15:04:52 -- common/autotest_common.sh@10 -- # set +x 00:05:27.515 15:04:52 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:27.515 15:04:52 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:05:27.515 15:04:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:27.515 15:04:52 -- common/autotest_common.sh@1111 
-- # xtrace_disable 00:05:27.515 15:04:52 -- common/autotest_common.sh@10 -- # set +x 00:05:27.515 ************************************ 00:05:27.515 START TEST env 00:05:27.515 ************************************ 00:05:27.515 15:04:52 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:05:27.515 * Looking for test storage... 00:05:27.515 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env 00:05:27.515 15:04:52 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:27.515 15:04:52 env -- common/autotest_common.sh@1693 -- # lcov --version 00:05:27.515 15:04:52 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:27.515 15:04:52 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:27.515 15:04:52 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:27.515 15:04:52 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:27.515 15:04:52 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:27.515 15:04:52 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:27.515 15:04:52 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:27.515 15:04:52 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:27.515 15:04:52 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:27.515 15:04:52 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:27.515 15:04:52 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:27.515 15:04:52 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:27.515 15:04:52 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:27.515 15:04:52 env -- scripts/common.sh@344 -- # case "$op" in 00:05:27.515 15:04:52 env -- scripts/common.sh@345 -- # : 1 00:05:27.515 15:04:52 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:27.515 15:04:52 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:27.515 15:04:52 env -- scripts/common.sh@365 -- # decimal 1 00:05:27.515 15:04:52 env -- scripts/common.sh@353 -- # local d=1 00:05:27.515 15:04:52 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:27.515 15:04:52 env -- scripts/common.sh@355 -- # echo 1 00:05:27.515 15:04:52 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:27.515 15:04:52 env -- scripts/common.sh@366 -- # decimal 2 00:05:27.515 15:04:52 env -- scripts/common.sh@353 -- # local d=2 00:05:27.515 15:04:52 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:27.515 15:04:52 env -- scripts/common.sh@355 -- # echo 2 00:05:27.515 15:04:52 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:27.515 15:04:52 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:27.515 15:04:52 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:27.515 15:04:52 env -- scripts/common.sh@368 -- # return 0 00:05:27.515 15:04:52 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:27.515 15:04:52 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:27.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.515 --rc genhtml_branch_coverage=1 00:05:27.515 --rc genhtml_function_coverage=1 00:05:27.515 --rc genhtml_legend=1 00:05:27.515 --rc geninfo_all_blocks=1 00:05:27.515 --rc geninfo_unexecuted_blocks=1 00:05:27.515 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:27.515 ' 00:05:27.515 15:04:52 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:27.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.515 --rc genhtml_branch_coverage=1 00:05:27.515 --rc genhtml_function_coverage=1 00:05:27.515 --rc genhtml_legend=1 00:05:27.515 --rc geninfo_all_blocks=1 00:05:27.515 --rc geninfo_unexecuted_blocks=1 00:05:27.515 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:27.515 ' 00:05:27.515 15:04:52 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:27.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.515 --rc genhtml_branch_coverage=1 00:05:27.515 --rc genhtml_function_coverage=1 00:05:27.515 --rc genhtml_legend=1 00:05:27.515 --rc geninfo_all_blocks=1 00:05:27.515 --rc geninfo_unexecuted_blocks=1 00:05:27.515 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:27.515 ' 00:05:27.515 15:04:52 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:27.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.515 --rc genhtml_branch_coverage=1 00:05:27.515 --rc genhtml_function_coverage=1 00:05:27.515 --rc genhtml_legend=1 00:05:27.515 --rc geninfo_all_blocks=1 00:05:27.515 --rc geninfo_unexecuted_blocks=1 00:05:27.515 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:27.515 ' 00:05:27.515 15:04:52 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:05:27.515 15:04:52 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:27.515 15:04:52 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:27.515 15:04:52 env -- common/autotest_common.sh@10 -- # set +x 00:05:27.515 ************************************ 00:05:27.515 START TEST env_memory 00:05:27.515 ************************************ 00:05:27.515 15:04:52 env.env_memory -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:05:27.515 00:05:27.515 00:05:27.515 CUnit - A unit testing framework for C - Version 2.1-3 00:05:27.515 http://cunit.sourceforge.net/ 00:05:27.516 00:05:27.516 00:05:27.516 Suite: memory 00:05:27.516 Test: alloc and free memory map ...[2024-11-27 15:04:52.655477] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:27.516 passed 00:05:27.516 Test: mem map translation ...[2024-11-27 15:04:52.667976] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 596:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:27.516 [2024-11-27 15:04:52.667992] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 596:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:27.516 [2024-11-27 15:04:52.668022] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:27.516 [2024-11-27 15:04:52.668030] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:27.516 passed 00:05:27.516 Test: mem map registration ...[2024-11-27 15:04:52.688073] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 348:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:27.516 [2024-11-27 15:04:52.688090] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 348:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:27.516 passed 00:05:27.516 Test: mem map adjacent registrations ...passed 00:05:27.516 00:05:27.516 Run Summary: Type Total Ran Passed Failed Inactive 00:05:27.516 suites 1 1 n/a 0 0 00:05:27.516 tests 4 4 4 0 0 00:05:27.516 asserts 152 152 152 0 n/a 00:05:27.516 00:05:27.516 Elapsed time = 0.083 seconds 00:05:27.516 00:05:27.516 real 0m0.095s 00:05:27.516 user 0m0.080s 00:05:27.516 sys 0m0.015s 00:05:27.516 15:04:52 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:27.516 15:04:52 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:27.516 ************************************ 00:05:27.516 END TEST env_memory 00:05:27.516 ************************************ 00:05:27.516 15:04:52 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:27.516 15:04:52 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:27.516 15:04:52 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:27.516 15:04:52 env -- common/autotest_common.sh@10 -- # set +x 00:05:27.516 ************************************ 00:05:27.516 START TEST env_vtophys 00:05:27.516 ************************************ 00:05:27.516 15:04:52 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:27.516 EAL: lib.eal log level changed from notice to debug 00:05:27.516 EAL: Detected lcore 0 as core 0 on socket 0 00:05:27.516 EAL: Detected lcore 1 as core 1 on socket 0 00:05:27.516 EAL: Detected lcore 2 as core 2 on socket 0 00:05:27.516 EAL: Detected lcore 3 as 
core 3 on socket 0 00:05:27.516 EAL: Detected lcore 4 as core 4 on socket 0 00:05:27.516 EAL: Detected lcore 5 as core 5 on socket 0 00:05:27.516 EAL: Detected lcore 6 as core 6 on socket 0 00:05:27.516 EAL: Detected lcore 7 as core 8 on socket 0 00:05:27.516 EAL: Detected lcore 8 as core 9 on socket 0 00:05:27.516 EAL: Detected lcore 9 as core 10 on socket 0 00:05:27.516 EAL: Detected lcore 10 as core 11 on socket 0 00:05:27.516 EAL: Detected lcore 11 as core 12 on socket 0 00:05:27.516 EAL: Detected lcore 12 as core 13 on socket 0 00:05:27.516 EAL: Detected lcore 13 as core 14 on socket 0 00:05:27.516 EAL: Detected lcore 14 as core 16 on socket 0 00:05:27.516 EAL: Detected lcore 15 as core 17 on socket 0 00:05:27.516 EAL: Detected lcore 16 as core 18 on socket 0 00:05:27.516 EAL: Detected lcore 17 as core 19 on socket 0 00:05:27.516 EAL: Detected lcore 18 as core 20 on socket 0 00:05:27.516 EAL: Detected lcore 19 as core 21 on socket 0 00:05:27.516 EAL: Detected lcore 20 as core 22 on socket 0 00:05:27.516 EAL: Detected lcore 21 as core 24 on socket 0 00:05:27.516 EAL: Detected lcore 22 as core 25 on socket 0 00:05:27.516 EAL: Detected lcore 23 as core 26 on socket 0 00:05:27.516 EAL: Detected lcore 24 as core 27 on socket 0 00:05:27.516 EAL: Detected lcore 25 as core 28 on socket 0 00:05:27.516 EAL: Detected lcore 26 as core 29 on socket 0 00:05:27.516 EAL: Detected lcore 27 as core 30 on socket 0 00:05:27.516 EAL: Detected lcore 28 as core 0 on socket 1 00:05:27.516 EAL: Detected lcore 29 as core 1 on socket 1 00:05:27.516 EAL: Detected lcore 30 as core 2 on socket 1 00:05:27.516 EAL: Detected lcore 31 as core 3 on socket 1 00:05:27.516 EAL: Detected lcore 32 as core 4 on socket 1 00:05:27.516 EAL: Detected lcore 33 as core 5 on socket 1 00:05:27.516 EAL: Detected lcore 34 as core 6 on socket 1 00:05:27.516 EAL: Detected lcore 35 as core 8 on socket 1 00:05:27.516 EAL: Detected lcore 36 as core 9 on socket 1 00:05:27.516 EAL: Detected lcore 37 as core 10 on socket 1 00:05:27.516 EAL: Detected lcore 38 as core 11 on socket 1 00:05:27.516 EAL: Detected lcore 39 as core 12 on socket 1 00:05:27.516 EAL: Detected lcore 40 as core 13 on socket 1 00:05:27.516 EAL: Detected lcore 41 as core 14 on socket 1 00:05:27.516 EAL: Detected lcore 42 as core 16 on socket 1 00:05:27.516 EAL: Detected lcore 43 as core 17 on socket 1 00:05:27.516 EAL: Detected lcore 44 as core 18 on socket 1 00:05:27.516 EAL: Detected lcore 45 as core 19 on socket 1 00:05:27.516 EAL: Detected lcore 46 as core 20 on socket 1 00:05:27.516 EAL: Detected lcore 47 as core 21 on socket 1 00:05:27.516 EAL: Detected lcore 48 as core 22 on socket 1 00:05:27.516 EAL: Detected lcore 49 as core 24 on socket 1 00:05:27.516 EAL: Detected lcore 50 as core 25 on socket 1 00:05:27.516 EAL: Detected lcore 51 as core 26 on socket 1 00:05:27.516 EAL: Detected lcore 52 as core 27 on socket 1 00:05:27.516 EAL: Detected lcore 53 as core 28 on socket 1 00:05:27.516 EAL: Detected lcore 54 as core 29 on socket 1 00:05:27.516 EAL: Detected lcore 55 as core 30 on socket 1 00:05:27.516 EAL: Detected lcore 56 as core 0 on socket 0 00:05:27.516 EAL: Detected lcore 57 as core 1 on socket 0 00:05:27.516 EAL: Detected lcore 58 as core 2 on socket 0 00:05:27.516 EAL: Detected lcore 59 as core 3 on socket 0 00:05:27.516 EAL: Detected lcore 60 as core 4 on socket 0 00:05:27.516 EAL: Detected lcore 61 as core 5 on socket 0 00:05:27.516 EAL: Detected lcore 62 as core 6 on socket 0 00:05:27.516 EAL: Detected lcore 63 as core 8 on socket 0 00:05:27.516 EAL: 
Detected lcore 64 as core 9 on socket 0 00:05:27.516 EAL: Detected lcore 65 as core 10 on socket 0 00:05:27.516 EAL: Detected lcore 66 as core 11 on socket 0 00:05:27.516 EAL: Detected lcore 67 as core 12 on socket 0 00:05:27.516 EAL: Detected lcore 68 as core 13 on socket 0 00:05:27.516 EAL: Detected lcore 69 as core 14 on socket 0 00:05:27.516 EAL: Detected lcore 70 as core 16 on socket 0 00:05:27.516 EAL: Detected lcore 71 as core 17 on socket 0 00:05:27.516 EAL: Detected lcore 72 as core 18 on socket 0 00:05:27.516 EAL: Detected lcore 73 as core 19 on socket 0 00:05:27.516 EAL: Detected lcore 74 as core 20 on socket 0 00:05:27.516 EAL: Detected lcore 75 as core 21 on socket 0 00:05:27.516 EAL: Detected lcore 76 as core 22 on socket 0 00:05:27.516 EAL: Detected lcore 77 as core 24 on socket 0 00:05:27.516 EAL: Detected lcore 78 as core 25 on socket 0 00:05:27.516 EAL: Detected lcore 79 as core 26 on socket 0 00:05:27.516 EAL: Detected lcore 80 as core 27 on socket 0 00:05:27.516 EAL: Detected lcore 81 as core 28 on socket 0 00:05:27.516 EAL: Detected lcore 82 as core 29 on socket 0 00:05:27.516 EAL: Detected lcore 83 as core 30 on socket 0 00:05:27.516 EAL: Detected lcore 84 as core 0 on socket 1 00:05:27.516 EAL: Detected lcore 85 as core 1 on socket 1 00:05:27.516 EAL: Detected lcore 86 as core 2 on socket 1 00:05:27.516 EAL: Detected lcore 87 as core 3 on socket 1 00:05:27.516 EAL: Detected lcore 88 as core 4 on socket 1 00:05:27.516 EAL: Detected lcore 89 as core 5 on socket 1 00:05:27.516 EAL: Detected lcore 90 as core 6 on socket 1 00:05:27.516 EAL: Detected lcore 91 as core 8 on socket 1 00:05:27.516 EAL: Detected lcore 92 as core 9 on socket 1 00:05:27.516 EAL: Detected lcore 93 as core 10 on socket 1 00:05:27.516 EAL: Detected lcore 94 as core 11 on socket 1 00:05:27.516 EAL: Detected lcore 95 as core 12 on socket 1 00:05:27.516 EAL: Detected lcore 96 as core 13 on socket 1 00:05:27.516 EAL: Detected lcore 97 as core 14 on socket 1 00:05:27.516 EAL: Detected lcore 98 as core 16 on socket 1 00:05:27.516 EAL: Detected lcore 99 as core 17 on socket 1 00:05:27.516 EAL: Detected lcore 100 as core 18 on socket 1 00:05:27.516 EAL: Detected lcore 101 as core 19 on socket 1 00:05:27.516 EAL: Detected lcore 102 as core 20 on socket 1 00:05:27.516 EAL: Detected lcore 103 as core 21 on socket 1 00:05:27.516 EAL: Detected lcore 104 as core 22 on socket 1 00:05:27.516 EAL: Detected lcore 105 as core 24 on socket 1 00:05:27.516 EAL: Detected lcore 106 as core 25 on socket 1 00:05:27.516 EAL: Detected lcore 107 as core 26 on socket 1 00:05:27.516 EAL: Detected lcore 108 as core 27 on socket 1 00:05:27.516 EAL: Detected lcore 109 as core 28 on socket 1 00:05:27.516 EAL: Detected lcore 110 as core 29 on socket 1 00:05:27.516 EAL: Detected lcore 111 as core 30 on socket 1 00:05:27.516 EAL: Maximum logical cores by configuration: 128 00:05:27.516 EAL: Detected CPU lcores: 112 00:05:27.516 EAL: Detected NUMA nodes: 2 00:05:27.516 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:27.516 EAL: Checking presence of .so 'librte_eal.so.24' 00:05:27.516 EAL: Checking presence of .so 'librte_eal.so' 00:05:27.516 EAL: Detected static linkage of DPDK 00:05:27.516 EAL: No shared files mode enabled, IPC will be disabled 00:05:27.776 EAL: Bus pci wants IOVA as 'DC' 00:05:27.776 EAL: Buses did not request a specific IOVA mode. 00:05:27.776 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:27.776 EAL: Selected IOVA mode 'VA' 00:05:27.776 EAL: Probing VFIO support... 
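The lcore map EAL prints above (112 lcores across 2 sockets, static DPDK linkage, IOVA mode 'VA' via VFIO) can be cross-checked against the host outside of DPDK; a quick sketch using standard sysfs and lscpu, nothing SPDK-specific:

    # socket/core/thread topology as the kernel sees it
    lscpu | grep -E '^(CPU\(s\)|Socket\(s\)|Core\(s\) per socket|NUMA node)'
    # per-node CPU lists backing the "socket 0"/"socket 1" lines above
    cat /sys/devices/system/node/node*/cpulist
    # confirm IOMMU groups are populated, i.e. VFIO has something to bind to
    ls /sys/kernel/iommu_groups | wc -l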
00:05:27.776 EAL: IOMMU type 1 (Type 1) is supported 00:05:27.776 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:27.776 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:27.776 EAL: VFIO support initialized 00:05:27.776 EAL: Ask a virtual area of 0x2e000 bytes 00:05:27.776 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:27.776 EAL: Setting up physically contiguous memory... 00:05:27.776 EAL: Setting maximum number of open files to 524288 00:05:27.776 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:27.776 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:27.776 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:27.776 EAL: Ask a virtual area of 0x61000 bytes 00:05:27.776 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:27.776 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:27.776 EAL: Ask a virtual area of 0x400000000 bytes 00:05:27.776 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:27.776 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:27.776 EAL: Ask a virtual area of 0x61000 bytes 00:05:27.776 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:27.776 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:27.776 EAL: Ask a virtual area of 0x400000000 bytes 00:05:27.776 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:27.776 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:27.776 EAL: Ask a virtual area of 0x61000 bytes 00:05:27.776 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:27.776 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:27.776 EAL: Ask a virtual area of 0x400000000 bytes 00:05:27.776 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:27.776 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:27.776 EAL: Ask a virtual area of 0x61000 bytes 00:05:27.776 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:27.776 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:27.776 EAL: Ask a virtual area of 0x400000000 bytes 00:05:27.776 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:27.776 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:27.776 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:27.776 EAL: Ask a virtual area of 0x61000 bytes 00:05:27.776 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:27.776 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:27.776 EAL: Ask a virtual area of 0x400000000 bytes 00:05:27.776 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:27.776 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:27.776 EAL: Ask a virtual area of 0x61000 bytes 00:05:27.776 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:27.776 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:27.776 EAL: Ask a virtual area of 0x400000000 bytes 00:05:27.776 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:27.776 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:27.776 EAL: Ask a virtual area of 0x61000 bytes 00:05:27.776 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:27.776 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:27.776 EAL: Ask a virtual area of 0x400000000 bytes 00:05:27.776 EAL: Virtual area found at 
0x201800e00000 (size = 0x400000000) 00:05:27.776 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:27.776 EAL: Ask a virtual area of 0x61000 bytes 00:05:27.776 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:27.776 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:27.776 EAL: Ask a virtual area of 0x400000000 bytes 00:05:27.776 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:05:27.776 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:27.776 EAL: Hugepages will be freed exactly as allocated. 00:05:27.776 EAL: No shared files mode enabled, IPC is disabled 00:05:27.776 EAL: No shared files mode enabled, IPC is disabled 00:05:27.776 EAL: TSC frequency is ~2500000 KHz 00:05:27.776 EAL: Main lcore 0 is ready (tid=7f9bec486a00;cpuset=[0]) 00:05:27.776 EAL: Trying to obtain current memory policy. 00:05:27.776 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.776 EAL: Restoring previous memory policy: 0 00:05:27.776 EAL: request: mp_malloc_sync 00:05:27.776 EAL: No shared files mode enabled, IPC is disabled 00:05:27.776 EAL: Heap on socket 0 was expanded by 2MB 00:05:27.776 EAL: No shared files mode enabled, IPC is disabled 00:05:27.776 EAL: Mem event callback 'spdk:(nil)' registered 00:05:27.776 00:05:27.776 00:05:27.776 CUnit - A unit testing framework for C - Version 2.1-3 00:05:27.776 http://cunit.sourceforge.net/ 00:05:27.776 00:05:27.776 00:05:27.776 Suite: components_suite 00:05:27.776 Test: vtophys_malloc_test ...passed 00:05:27.776 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:27.776 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.776 EAL: Restoring previous memory policy: 4 00:05:27.776 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.776 EAL: request: mp_malloc_sync 00:05:27.776 EAL: No shared files mode enabled, IPC is disabled 00:05:27.776 EAL: Heap on socket 0 was expanded by 4MB 00:05:27.776 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.776 EAL: request: mp_malloc_sync 00:05:27.776 EAL: No shared files mode enabled, IPC is disabled 00:05:27.776 EAL: Heap on socket 0 was shrunk by 4MB 00:05:27.776 EAL: Trying to obtain current memory policy. 00:05:27.776 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.776 EAL: Restoring previous memory policy: 4 00:05:27.776 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.776 EAL: request: mp_malloc_sync 00:05:27.776 EAL: No shared files mode enabled, IPC is disabled 00:05:27.776 EAL: Heap on socket 0 was expanded by 6MB 00:05:27.776 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.776 EAL: request: mp_malloc_sync 00:05:27.776 EAL: No shared files mode enabled, IPC is disabled 00:05:27.776 EAL: Heap on socket 0 was shrunk by 6MB 00:05:27.776 EAL: Trying to obtain current memory policy. 00:05:27.776 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.776 EAL: Restoring previous memory policy: 4 00:05:27.776 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.776 EAL: request: mp_malloc_sync 00:05:27.776 EAL: No shared files mode enabled, IPC is disabled 00:05:27.776 EAL: Heap on socket 0 was expanded by 10MB 00:05:27.776 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.776 EAL: request: mp_malloc_sync 00:05:27.776 EAL: No shared files mode enabled, IPC is disabled 00:05:27.776 EAL: Heap on socket 0 was shrunk by 10MB 00:05:27.776 EAL: Trying to obtain current memory policy. 
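Each "Heap on socket 0 was expanded/shrunk by N MB" pair above corresponds to the vtophys_spdk_malloc_test allocating and then freeing a progressively larger buffer, with EAL claiming and returning 2 MB hugepages to back it. A rough way to watch that from a second shell while the test runs (standard hugetlbfs sysfs counters, not part of the test itself):

    # free vs. total 2 MB hugepages, refreshed each second
    watch -n1 'grep -H . /sys/kernel/mm/hugepages/hugepages-2048kB/*_hugepages'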
00:05:27.776 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.776 EAL: Restoring previous memory policy: 4 00:05:27.776 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.776 EAL: request: mp_malloc_sync 00:05:27.776 EAL: No shared files mode enabled, IPC is disabled 00:05:27.776 EAL: Heap on socket 0 was expanded by 18MB 00:05:27.776 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.776 EAL: request: mp_malloc_sync 00:05:27.776 EAL: No shared files mode enabled, IPC is disabled 00:05:27.776 EAL: Heap on socket 0 was shrunk by 18MB 00:05:27.776 EAL: Trying to obtain current memory policy. 00:05:27.776 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.776 EAL: Restoring previous memory policy: 4 00:05:27.776 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.776 EAL: request: mp_malloc_sync 00:05:27.776 EAL: No shared files mode enabled, IPC is disabled 00:05:27.776 EAL: Heap on socket 0 was expanded by 34MB 00:05:27.776 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.776 EAL: request: mp_malloc_sync 00:05:27.776 EAL: No shared files mode enabled, IPC is disabled 00:05:27.776 EAL: Heap on socket 0 was shrunk by 34MB 00:05:27.776 EAL: Trying to obtain current memory policy. 00:05:27.776 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.776 EAL: Restoring previous memory policy: 4 00:05:27.776 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.776 EAL: request: mp_malloc_sync 00:05:27.776 EAL: No shared files mode enabled, IPC is disabled 00:05:27.776 EAL: Heap on socket 0 was expanded by 66MB 00:05:27.776 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.776 EAL: request: mp_malloc_sync 00:05:27.776 EAL: No shared files mode enabled, IPC is disabled 00:05:27.776 EAL: Heap on socket 0 was shrunk by 66MB 00:05:27.776 EAL: Trying to obtain current memory policy. 00:05:27.776 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.776 EAL: Restoring previous memory policy: 4 00:05:27.776 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.776 EAL: request: mp_malloc_sync 00:05:27.776 EAL: No shared files mode enabled, IPC is disabled 00:05:27.776 EAL: Heap on socket 0 was expanded by 130MB 00:05:27.776 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.776 EAL: request: mp_malloc_sync 00:05:27.776 EAL: No shared files mode enabled, IPC is disabled 00:05:27.776 EAL: Heap on socket 0 was shrunk by 130MB 00:05:27.776 EAL: Trying to obtain current memory policy. 00:05:27.776 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.776 EAL: Restoring previous memory policy: 4 00:05:27.776 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.776 EAL: request: mp_malloc_sync 00:05:27.776 EAL: No shared files mode enabled, IPC is disabled 00:05:27.776 EAL: Heap on socket 0 was expanded by 258MB 00:05:27.776 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.036 EAL: request: mp_malloc_sync 00:05:28.036 EAL: No shared files mode enabled, IPC is disabled 00:05:28.036 EAL: Heap on socket 0 was shrunk by 258MB 00:05:28.036 EAL: Trying to obtain current memory policy. 
00:05:28.036 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:28.036 EAL: Restoring previous memory policy: 4 00:05:28.036 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.036 EAL: request: mp_malloc_sync 00:05:28.036 EAL: No shared files mode enabled, IPC is disabled 00:05:28.036 EAL: Heap on socket 0 was expanded by 514MB 00:05:28.036 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.036 EAL: request: mp_malloc_sync 00:05:28.036 EAL: No shared files mode enabled, IPC is disabled 00:05:28.036 EAL: Heap on socket 0 was shrunk by 514MB 00:05:28.036 EAL: Trying to obtain current memory policy. 00:05:28.036 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:28.295 EAL: Restoring previous memory policy: 4 00:05:28.295 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.295 EAL: request: mp_malloc_sync 00:05:28.295 EAL: No shared files mode enabled, IPC is disabled 00:05:28.295 EAL: Heap on socket 0 was expanded by 1026MB 00:05:28.556 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.556 EAL: request: mp_malloc_sync 00:05:28.556 EAL: No shared files mode enabled, IPC is disabled 00:05:28.556 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:28.556 passed 00:05:28.556 00:05:28.556 Run Summary: Type Total Ran Passed Failed Inactive 00:05:28.556 suites 1 1 n/a 0 0 00:05:28.556 tests 2 2 2 0 0 00:05:28.556 asserts 497 497 497 0 n/a 00:05:28.556 00:05:28.556 Elapsed time = 0.956 seconds 00:05:28.556 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.556 EAL: request: mp_malloc_sync 00:05:28.556 EAL: No shared files mode enabled, IPC is disabled 00:05:28.556 EAL: Heap on socket 0 was shrunk by 2MB 00:05:28.556 EAL: No shared files mode enabled, IPC is disabled 00:05:28.556 EAL: No shared files mode enabled, IPC is disabled 00:05:28.556 EAL: No shared files mode enabled, IPC is disabled 00:05:28.556 00:05:28.556 real 0m1.079s 00:05:28.556 user 0m0.633s 00:05:28.556 sys 0m0.423s 00:05:28.556 15:04:53 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:28.556 15:04:53 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:28.556 ************************************ 00:05:28.556 END TEST env_vtophys 00:05:28.556 ************************************ 00:05:28.815 15:04:53 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:05:28.815 15:04:53 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:28.815 15:04:53 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:28.815 15:04:53 env -- common/autotest_common.sh@10 -- # set +x 00:05:28.815 ************************************ 00:05:28.815 START TEST env_pci 00:05:28.815 ************************************ 00:05:28.815 15:04:53 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:05:28.815 00:05:28.815 00:05:28.815 CUnit - A unit testing framework for C - Version 2.1-3 00:05:28.815 http://cunit.sourceforge.net/ 00:05:28.815 00:05:28.815 00:05:28.815 Suite: pci 00:05:28.815 Test: pci_hook ...[2024-11-27 15:04:53.975557] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/pci.c:1118:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2350226 has claimed it 00:05:28.815 EAL: Cannot find device (10000:00:01.0) 00:05:28.815 EAL: Failed to attach device on primary process 00:05:28.815 passed 00:05:28.815 00:05:28.815 Run Summary: Type Total Ran Passed Failed Inactive 
00:05:28.815 suites 1 1 n/a 0 0 00:05:28.815 tests 1 1 1 0 0 00:05:28.815 asserts 25 25 25 0 n/a 00:05:28.815 00:05:28.815 Elapsed time = 0.037 seconds 00:05:28.815 00:05:28.815 real 0m0.054s 00:05:28.815 user 0m0.017s 00:05:28.815 sys 0m0.038s 00:05:28.815 15:04:54 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:28.815 15:04:54 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:28.815 ************************************ 00:05:28.815 END TEST env_pci 00:05:28.815 ************************************ 00:05:28.815 15:04:54 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:28.815 15:04:54 env -- env/env.sh@15 -- # uname 00:05:28.815 15:04:54 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:28.815 15:04:54 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:28.815 15:04:54 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:28.815 15:04:54 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:05:28.815 15:04:54 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:28.815 15:04:54 env -- common/autotest_common.sh@10 -- # set +x 00:05:28.815 ************************************ 00:05:28.815 START TEST env_dpdk_post_init 00:05:28.815 ************************************ 00:05:28.815 15:04:54 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:28.815 EAL: Detected CPU lcores: 112 00:05:28.815 EAL: Detected NUMA nodes: 2 00:05:28.815 EAL: Detected static linkage of DPDK 00:05:28.815 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:29.074 EAL: Selected IOVA mode 'VA' 00:05:29.074 EAL: VFIO support initialized 00:05:29.074 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:29.074 EAL: Using IOMMU type 1 (Type 1) 00:05:29.643 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:d8:00.0 (socket 1) 00:05:33.833 EAL: Releasing PCI mapped resource for 0000:d8:00.0 00:05:33.833 EAL: Calling pci_unmap_resource for 0000:d8:00.0 at 0x202001000000 00:05:33.833 Starting DPDK initialization... 00:05:33.833 Starting SPDK post initialization... 00:05:33.833 SPDK NVMe probe 00:05:33.833 Attaching to 0000:d8:00.0 00:05:33.833 Attached to 0000:d8:00.0 00:05:33.833 Cleaning up... 
00:05:33.833 00:05:33.833 real 0m4.679s 00:05:33.833 user 0m3.337s 00:05:33.833 sys 0m0.587s 00:05:33.833 15:04:58 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:33.833 15:04:58 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:33.833 ************************************ 00:05:33.833 END TEST env_dpdk_post_init 00:05:33.833 ************************************ 00:05:33.833 15:04:58 env -- env/env.sh@26 -- # uname 00:05:33.833 15:04:58 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:33.833 15:04:58 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:33.833 15:04:58 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:33.833 15:04:58 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:33.833 15:04:58 env -- common/autotest_common.sh@10 -- # set +x 00:05:33.833 ************************************ 00:05:33.833 START TEST env_mem_callbacks 00:05:33.833 ************************************ 00:05:33.833 15:04:58 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:33.833 EAL: Detected CPU lcores: 112 00:05:33.833 EAL: Detected NUMA nodes: 2 00:05:33.833 EAL: Detected static linkage of DPDK 00:05:33.833 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:33.833 EAL: Selected IOVA mode 'VA' 00:05:33.833 EAL: VFIO support initialized 00:05:33.833 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:33.833 00:05:33.833 00:05:33.833 CUnit - A unit testing framework for C - Version 2.1-3 00:05:33.833 http://cunit.sourceforge.net/ 00:05:33.833 00:05:33.833 00:05:33.833 Suite: memory 00:05:33.833 Test: test ... 
00:05:33.833 register 0x200000200000 2097152 00:05:33.833 malloc 3145728 00:05:33.833 register 0x200000400000 4194304 00:05:33.833 buf 0x200000500000 len 3145728 PASSED 00:05:33.833 malloc 64 00:05:33.833 buf 0x2000004fff40 len 64 PASSED 00:05:33.833 malloc 4194304 00:05:33.833 register 0x200000800000 6291456 00:05:33.833 buf 0x200000a00000 len 4194304 PASSED 00:05:33.833 free 0x200000500000 3145728 00:05:33.833 free 0x2000004fff40 64 00:05:33.833 unregister 0x200000400000 4194304 PASSED 00:05:33.833 free 0x200000a00000 4194304 00:05:33.833 unregister 0x200000800000 6291456 PASSED 00:05:33.833 malloc 8388608 00:05:33.833 register 0x200000400000 10485760 00:05:33.833 buf 0x200000600000 len 8388608 PASSED 00:05:33.833 free 0x200000600000 8388608 00:05:33.833 unregister 0x200000400000 10485760 PASSED 00:05:33.833 passed 00:05:33.833 00:05:33.833 Run Summary: Type Total Ran Passed Failed Inactive 00:05:33.833 suites 1 1 n/a 0 0 00:05:33.834 tests 1 1 1 0 0 00:05:33.834 asserts 15 15 15 0 n/a 00:05:33.834 00:05:33.834 Elapsed time = 0.006 seconds 00:05:33.834 00:05:33.834 real 0m0.065s 00:05:33.834 user 0m0.023s 00:05:33.834 sys 0m0.042s 00:05:33.834 15:04:58 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:33.834 15:04:58 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:33.834 ************************************ 00:05:33.834 END TEST env_mem_callbacks 00:05:33.834 ************************************ 00:05:33.834 00:05:33.834 real 0m6.597s 00:05:33.834 user 0m4.352s 00:05:33.834 sys 0m1.517s 00:05:33.834 15:04:58 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:33.834 15:04:58 env -- common/autotest_common.sh@10 -- # set +x 00:05:33.834 ************************************ 00:05:33.834 END TEST env 00:05:33.834 ************************************ 00:05:33.834 15:04:59 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:05:33.834 15:04:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:33.834 15:04:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:33.834 15:04:59 -- common/autotest_common.sh@10 -- # set +x 00:05:33.834 ************************************ 00:05:33.834 START TEST rpc 00:05:33.834 ************************************ 00:05:33.834 15:04:59 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:05:33.834 * Looking for test storage... 
00:05:34.093 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:05:34.093 15:04:59 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:34.093 15:04:59 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:34.093 15:04:59 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:34.093 15:04:59 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:34.093 15:04:59 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:34.093 15:04:59 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:34.093 15:04:59 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:34.093 15:04:59 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:34.093 15:04:59 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:34.093 15:04:59 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:34.094 15:04:59 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:34.094 15:04:59 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:34.094 15:04:59 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:34.094 15:04:59 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:34.094 15:04:59 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:34.094 15:04:59 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:34.094 15:04:59 rpc -- scripts/common.sh@345 -- # : 1 00:05:34.094 15:04:59 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:34.094 15:04:59 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:34.094 15:04:59 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:34.094 15:04:59 rpc -- scripts/common.sh@353 -- # local d=1 00:05:34.094 15:04:59 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:34.094 15:04:59 rpc -- scripts/common.sh@355 -- # echo 1 00:05:34.094 15:04:59 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:34.094 15:04:59 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:34.094 15:04:59 rpc -- scripts/common.sh@353 -- # local d=2 00:05:34.094 15:04:59 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:34.094 15:04:59 rpc -- scripts/common.sh@355 -- # echo 2 00:05:34.094 15:04:59 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:34.094 15:04:59 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:34.094 15:04:59 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:34.094 15:04:59 rpc -- scripts/common.sh@368 -- # return 0 00:05:34.094 15:04:59 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:34.094 15:04:59 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:34.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.094 --rc genhtml_branch_coverage=1 00:05:34.094 --rc genhtml_function_coverage=1 00:05:34.094 --rc genhtml_legend=1 00:05:34.094 --rc geninfo_all_blocks=1 00:05:34.094 --rc geninfo_unexecuted_blocks=1 00:05:34.094 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:34.094 ' 00:05:34.094 15:04:59 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:34.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.094 --rc genhtml_branch_coverage=1 00:05:34.094 --rc genhtml_function_coverage=1 00:05:34.094 --rc genhtml_legend=1 00:05:34.094 --rc geninfo_all_blocks=1 00:05:34.094 --rc geninfo_unexecuted_blocks=1 00:05:34.094 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:34.094 ' 00:05:34.094 15:04:59 rpc -- common/autotest_common.sh@1707 -- # 
export 'LCOV=lcov 00:05:34.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.094 --rc genhtml_branch_coverage=1 00:05:34.094 --rc genhtml_function_coverage=1 00:05:34.094 --rc genhtml_legend=1 00:05:34.094 --rc geninfo_all_blocks=1 00:05:34.094 --rc geninfo_unexecuted_blocks=1 00:05:34.094 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:34.094 ' 00:05:34.094 15:04:59 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:34.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.094 --rc genhtml_branch_coverage=1 00:05:34.094 --rc genhtml_function_coverage=1 00:05:34.094 --rc genhtml_legend=1 00:05:34.094 --rc geninfo_all_blocks=1 00:05:34.094 --rc geninfo_unexecuted_blocks=1 00:05:34.094 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:34.094 ' 00:05:34.094 15:04:59 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2351369 00:05:34.094 15:04:59 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:34.094 15:04:59 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:34.094 15:04:59 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2351369 00:05:34.094 15:04:59 rpc -- common/autotest_common.sh@835 -- # '[' -z 2351369 ']' 00:05:34.094 15:04:59 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.094 15:04:59 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:34.094 15:04:59 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:34.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:34.094 15:04:59 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:34.094 15:04:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.094 [2024-11-27 15:04:59.291355] Starting SPDK v25.01-pre git sha1 2e10c84c8 / DPDK 24.03.0 initialization... 00:05:34.094 [2024-11-27 15:04:59.291419] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2351369 ] 00:05:34.094 [2024-11-27 15:04:59.361132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.094 [2024-11-27 15:04:59.402901] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:34.094 [2024-11-27 15:04:59.402938] app.c: 616:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2351369' to capture a snapshot of events at runtime. 00:05:34.094 [2024-11-27 15:04:59.402947] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:34.094 [2024-11-27 15:04:59.402955] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:34.094 [2024-11-27 15:04:59.402962] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2351369 for offline analysis/debug. 
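While the target started above is still up, the trace snapshot suggested by the notice could be captured directly (spdk_trace path assumed relative to the build tree, pid taken from the notice):

  sudo ./build/bin/spdk_trace -s spdk_tgt -p 2351369

Alternatively, as the last notice says, /dev/shm/spdk_tgt_trace.pid2351369 can simply be copied aside for offline analysis. Only the bdev tracepoint group carries a non-zero mask in this run because spdk_tgt was launched with '-e bdev', which is also what the trace_get_info output further down reflects.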
00:05:34.094 [2024-11-27 15:04:59.403604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.353 15:04:59 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:34.353 15:04:59 rpc -- common/autotest_common.sh@868 -- # return 0 00:05:34.353 15:04:59 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:05:34.353 15:04:59 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:05:34.353 15:04:59 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:34.353 15:04:59 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:34.353 15:04:59 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:34.353 15:04:59 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:34.353 15:04:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.353 ************************************ 00:05:34.353 START TEST rpc_integrity 00:05:34.353 ************************************ 00:05:34.353 15:04:59 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:34.353 15:04:59 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:34.353 15:04:59 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.353 15:04:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:34.353 15:04:59 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.353 15:04:59 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:34.353 15:04:59 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:34.612 15:04:59 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:34.612 15:04:59 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:34.612 15:04:59 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.612 15:04:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:34.612 15:04:59 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.612 15:04:59 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:34.612 15:04:59 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:34.612 15:04:59 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.612 15:04:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:34.612 15:04:59 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.612 15:04:59 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:34.612 { 00:05:34.612 "name": "Malloc0", 00:05:34.612 "aliases": [ 00:05:34.612 "9bd98a4d-c80c-403c-9331-3b13c0816b80" 00:05:34.612 ], 00:05:34.612 "product_name": "Malloc disk", 00:05:34.612 "block_size": 512, 00:05:34.612 "num_blocks": 16384, 00:05:34.612 "uuid": "9bd98a4d-c80c-403c-9331-3b13c0816b80", 00:05:34.612 "assigned_rate_limits": { 00:05:34.612 "rw_ios_per_sec": 0, 00:05:34.612 "rw_mbytes_per_sec": 0, 00:05:34.612 "r_mbytes_per_sec": 0, 00:05:34.612 "w_mbytes_per_sec": 
0 00:05:34.612 }, 00:05:34.612 "claimed": false, 00:05:34.612 "zoned": false, 00:05:34.612 "supported_io_types": { 00:05:34.612 "read": true, 00:05:34.612 "write": true, 00:05:34.612 "unmap": true, 00:05:34.612 "flush": true, 00:05:34.612 "reset": true, 00:05:34.612 "nvme_admin": false, 00:05:34.612 "nvme_io": false, 00:05:34.612 "nvme_io_md": false, 00:05:34.612 "write_zeroes": true, 00:05:34.612 "zcopy": true, 00:05:34.612 "get_zone_info": false, 00:05:34.612 "zone_management": false, 00:05:34.612 "zone_append": false, 00:05:34.612 "compare": false, 00:05:34.612 "compare_and_write": false, 00:05:34.612 "abort": true, 00:05:34.612 "seek_hole": false, 00:05:34.612 "seek_data": false, 00:05:34.612 "copy": true, 00:05:34.612 "nvme_iov_md": false 00:05:34.612 }, 00:05:34.612 "memory_domains": [ 00:05:34.612 { 00:05:34.612 "dma_device_id": "system", 00:05:34.612 "dma_device_type": 1 00:05:34.612 }, 00:05:34.612 { 00:05:34.612 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:34.612 "dma_device_type": 2 00:05:34.612 } 00:05:34.612 ], 00:05:34.612 "driver_specific": {} 00:05:34.612 } 00:05:34.612 ]' 00:05:34.612 15:04:59 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:34.612 15:04:59 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:34.612 15:04:59 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:34.612 15:04:59 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.612 15:04:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:34.612 [2024-11-27 15:04:59.795544] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:34.612 [2024-11-27 15:04:59.795578] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:34.612 [2024-11-27 15:04:59.795596] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x567f280 00:05:34.612 [2024-11-27 15:04:59.795611] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:34.612 [2024-11-27 15:04:59.796497] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:34.612 [2024-11-27 15:04:59.796520] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:34.612 Passthru0 00:05:34.612 15:04:59 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.612 15:04:59 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:34.612 15:04:59 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.613 15:04:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:34.613 15:04:59 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.613 15:04:59 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:34.613 { 00:05:34.613 "name": "Malloc0", 00:05:34.613 "aliases": [ 00:05:34.613 "9bd98a4d-c80c-403c-9331-3b13c0816b80" 00:05:34.613 ], 00:05:34.613 "product_name": "Malloc disk", 00:05:34.613 "block_size": 512, 00:05:34.613 "num_blocks": 16384, 00:05:34.613 "uuid": "9bd98a4d-c80c-403c-9331-3b13c0816b80", 00:05:34.613 "assigned_rate_limits": { 00:05:34.613 "rw_ios_per_sec": 0, 00:05:34.613 "rw_mbytes_per_sec": 0, 00:05:34.613 "r_mbytes_per_sec": 0, 00:05:34.613 "w_mbytes_per_sec": 0 00:05:34.613 }, 00:05:34.613 "claimed": true, 00:05:34.613 "claim_type": "exclusive_write", 00:05:34.613 "zoned": false, 00:05:34.613 "supported_io_types": { 00:05:34.613 "read": true, 00:05:34.613 "write": true, 00:05:34.613 "unmap": true, 
00:05:34.613 "flush": true, 00:05:34.613 "reset": true, 00:05:34.613 "nvme_admin": false, 00:05:34.613 "nvme_io": false, 00:05:34.613 "nvme_io_md": false, 00:05:34.613 "write_zeroes": true, 00:05:34.613 "zcopy": true, 00:05:34.613 "get_zone_info": false, 00:05:34.613 "zone_management": false, 00:05:34.613 "zone_append": false, 00:05:34.613 "compare": false, 00:05:34.613 "compare_and_write": false, 00:05:34.613 "abort": true, 00:05:34.613 "seek_hole": false, 00:05:34.613 "seek_data": false, 00:05:34.613 "copy": true, 00:05:34.613 "nvme_iov_md": false 00:05:34.613 }, 00:05:34.613 "memory_domains": [ 00:05:34.613 { 00:05:34.613 "dma_device_id": "system", 00:05:34.613 "dma_device_type": 1 00:05:34.613 }, 00:05:34.613 { 00:05:34.613 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:34.613 "dma_device_type": 2 00:05:34.613 } 00:05:34.613 ], 00:05:34.613 "driver_specific": {} 00:05:34.613 }, 00:05:34.613 { 00:05:34.613 "name": "Passthru0", 00:05:34.613 "aliases": [ 00:05:34.613 "35a90017-fcf8-56c4-b535-6cbe98bf0627" 00:05:34.613 ], 00:05:34.613 "product_name": "passthru", 00:05:34.613 "block_size": 512, 00:05:34.613 "num_blocks": 16384, 00:05:34.613 "uuid": "35a90017-fcf8-56c4-b535-6cbe98bf0627", 00:05:34.613 "assigned_rate_limits": { 00:05:34.613 "rw_ios_per_sec": 0, 00:05:34.613 "rw_mbytes_per_sec": 0, 00:05:34.613 "r_mbytes_per_sec": 0, 00:05:34.613 "w_mbytes_per_sec": 0 00:05:34.613 }, 00:05:34.613 "claimed": false, 00:05:34.613 "zoned": false, 00:05:34.613 "supported_io_types": { 00:05:34.613 "read": true, 00:05:34.613 "write": true, 00:05:34.613 "unmap": true, 00:05:34.613 "flush": true, 00:05:34.613 "reset": true, 00:05:34.613 "nvme_admin": false, 00:05:34.613 "nvme_io": false, 00:05:34.613 "nvme_io_md": false, 00:05:34.613 "write_zeroes": true, 00:05:34.613 "zcopy": true, 00:05:34.613 "get_zone_info": false, 00:05:34.613 "zone_management": false, 00:05:34.613 "zone_append": false, 00:05:34.613 "compare": false, 00:05:34.613 "compare_and_write": false, 00:05:34.613 "abort": true, 00:05:34.613 "seek_hole": false, 00:05:34.613 "seek_data": false, 00:05:34.613 "copy": true, 00:05:34.613 "nvme_iov_md": false 00:05:34.613 }, 00:05:34.613 "memory_domains": [ 00:05:34.613 { 00:05:34.613 "dma_device_id": "system", 00:05:34.613 "dma_device_type": 1 00:05:34.613 }, 00:05:34.613 { 00:05:34.613 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:34.613 "dma_device_type": 2 00:05:34.613 } 00:05:34.613 ], 00:05:34.613 "driver_specific": { 00:05:34.613 "passthru": { 00:05:34.613 "name": "Passthru0", 00:05:34.613 "base_bdev_name": "Malloc0" 00:05:34.613 } 00:05:34.613 } 00:05:34.613 } 00:05:34.613 ]' 00:05:34.613 15:04:59 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:34.613 15:04:59 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:34.613 15:04:59 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:34.613 15:04:59 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.613 15:04:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:34.613 15:04:59 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.613 15:04:59 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:34.613 15:04:59 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.613 15:04:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:34.613 15:04:59 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.613 15:04:59 rpc.rpc_integrity -- 
rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:34.613 15:04:59 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.613 15:04:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:34.613 15:04:59 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.613 15:04:59 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:34.613 15:04:59 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:34.872 15:04:59 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:34.872 00:05:34.872 real 0m0.290s 00:05:34.872 user 0m0.177s 00:05:34.872 sys 0m0.053s 00:05:34.872 15:04:59 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:34.872 15:04:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:34.872 ************************************ 00:05:34.872 END TEST rpc_integrity 00:05:34.872 ************************************ 00:05:34.872 15:04:59 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:34.872 15:04:59 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:34.872 15:04:59 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:34.872 15:04:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.872 ************************************ 00:05:34.872 START TEST rpc_plugins 00:05:34.872 ************************************ 00:05:34.872 15:05:00 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:05:34.872 15:05:00 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:34.872 15:05:00 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.872 15:05:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:34.872 15:05:00 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.872 15:05:00 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:34.872 15:05:00 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:34.872 15:05:00 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.872 15:05:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:34.872 15:05:00 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.872 15:05:00 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:34.872 { 00:05:34.872 "name": "Malloc1", 00:05:34.872 "aliases": [ 00:05:34.872 "8143f4d6-eb69-4644-ab49-a58594075165" 00:05:34.872 ], 00:05:34.872 "product_name": "Malloc disk", 00:05:34.872 "block_size": 4096, 00:05:34.872 "num_blocks": 256, 00:05:34.872 "uuid": "8143f4d6-eb69-4644-ab49-a58594075165", 00:05:34.872 "assigned_rate_limits": { 00:05:34.872 "rw_ios_per_sec": 0, 00:05:34.872 "rw_mbytes_per_sec": 0, 00:05:34.872 "r_mbytes_per_sec": 0, 00:05:34.872 "w_mbytes_per_sec": 0 00:05:34.872 }, 00:05:34.872 "claimed": false, 00:05:34.872 "zoned": false, 00:05:34.872 "supported_io_types": { 00:05:34.872 "read": true, 00:05:34.872 "write": true, 00:05:34.872 "unmap": true, 00:05:34.872 "flush": true, 00:05:34.872 "reset": true, 00:05:34.872 "nvme_admin": false, 00:05:34.872 "nvme_io": false, 00:05:34.872 "nvme_io_md": false, 00:05:34.872 "write_zeroes": true, 00:05:34.872 "zcopy": true, 00:05:34.872 "get_zone_info": false, 00:05:34.872 "zone_management": false, 00:05:34.872 "zone_append": false, 00:05:34.872 "compare": false, 00:05:34.872 "compare_and_write": false, 00:05:34.872 "abort": true, 00:05:34.872 "seek_hole": false, 00:05:34.872 "seek_data": false, 00:05:34.872 "copy": true, 00:05:34.872 
"nvme_iov_md": false 00:05:34.872 }, 00:05:34.872 "memory_domains": [ 00:05:34.872 { 00:05:34.872 "dma_device_id": "system", 00:05:34.872 "dma_device_type": 1 00:05:34.872 }, 00:05:34.872 { 00:05:34.872 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:34.872 "dma_device_type": 2 00:05:34.872 } 00:05:34.872 ], 00:05:34.872 "driver_specific": {} 00:05:34.872 } 00:05:34.872 ]' 00:05:34.872 15:05:00 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:34.872 15:05:00 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:34.872 15:05:00 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:34.872 15:05:00 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.872 15:05:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:34.872 15:05:00 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.872 15:05:00 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:34.872 15:05:00 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.872 15:05:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:34.872 15:05:00 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.872 15:05:00 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:34.872 15:05:00 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:34.872 15:05:00 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:34.872 00:05:34.872 real 0m0.151s 00:05:34.872 user 0m0.091s 00:05:34.872 sys 0m0.025s 00:05:34.872 15:05:00 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:34.872 15:05:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:34.872 ************************************ 00:05:34.872 END TEST rpc_plugins 00:05:34.872 ************************************ 00:05:35.131 15:05:00 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:35.131 15:05:00 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:35.131 15:05:00 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:35.131 15:05:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:35.131 ************************************ 00:05:35.131 START TEST rpc_trace_cmd_test 00:05:35.131 ************************************ 00:05:35.131 15:05:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:05:35.131 15:05:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:35.131 15:05:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:35.131 15:05:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:35.131 15:05:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:35.131 15:05:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:35.131 15:05:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:35.131 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2351369", 00:05:35.131 "tpoint_group_mask": "0x8", 00:05:35.131 "iscsi_conn": { 00:05:35.131 "mask": "0x2", 00:05:35.131 "tpoint_mask": "0x0" 00:05:35.131 }, 00:05:35.131 "scsi": { 00:05:35.131 "mask": "0x4", 00:05:35.131 "tpoint_mask": "0x0" 00:05:35.131 }, 00:05:35.131 "bdev": { 00:05:35.131 "mask": "0x8", 00:05:35.131 "tpoint_mask": "0xffffffffffffffff" 00:05:35.131 }, 00:05:35.131 "nvmf_rdma": { 00:05:35.131 "mask": "0x10", 00:05:35.131 "tpoint_mask": "0x0" 00:05:35.131 }, 00:05:35.131 "nvmf_tcp": { 00:05:35.131 "mask": "0x20", 
00:05:35.131 "tpoint_mask": "0x0" 00:05:35.131 }, 00:05:35.131 "ftl": { 00:05:35.131 "mask": "0x40", 00:05:35.131 "tpoint_mask": "0x0" 00:05:35.131 }, 00:05:35.131 "blobfs": { 00:05:35.131 "mask": "0x80", 00:05:35.131 "tpoint_mask": "0x0" 00:05:35.131 }, 00:05:35.131 "dsa": { 00:05:35.131 "mask": "0x200", 00:05:35.131 "tpoint_mask": "0x0" 00:05:35.131 }, 00:05:35.131 "thread": { 00:05:35.131 "mask": "0x400", 00:05:35.131 "tpoint_mask": "0x0" 00:05:35.131 }, 00:05:35.131 "nvme_pcie": { 00:05:35.131 "mask": "0x800", 00:05:35.131 "tpoint_mask": "0x0" 00:05:35.131 }, 00:05:35.131 "iaa": { 00:05:35.131 "mask": "0x1000", 00:05:35.131 "tpoint_mask": "0x0" 00:05:35.131 }, 00:05:35.131 "nvme_tcp": { 00:05:35.131 "mask": "0x2000", 00:05:35.131 "tpoint_mask": "0x0" 00:05:35.131 }, 00:05:35.131 "bdev_nvme": { 00:05:35.131 "mask": "0x4000", 00:05:35.131 "tpoint_mask": "0x0" 00:05:35.131 }, 00:05:35.131 "sock": { 00:05:35.131 "mask": "0x8000", 00:05:35.131 "tpoint_mask": "0x0" 00:05:35.131 }, 00:05:35.131 "blob": { 00:05:35.131 "mask": "0x10000", 00:05:35.131 "tpoint_mask": "0x0" 00:05:35.131 }, 00:05:35.131 "bdev_raid": { 00:05:35.131 "mask": "0x20000", 00:05:35.131 "tpoint_mask": "0x0" 00:05:35.131 }, 00:05:35.131 "scheduler": { 00:05:35.131 "mask": "0x40000", 00:05:35.131 "tpoint_mask": "0x0" 00:05:35.131 } 00:05:35.131 }' 00:05:35.131 15:05:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:35.131 15:05:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:35.131 15:05:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:35.131 15:05:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:35.131 15:05:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:35.131 15:05:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:35.131 15:05:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:35.131 15:05:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:35.131 15:05:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:35.390 15:05:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:35.391 00:05:35.391 real 0m0.229s 00:05:35.391 user 0m0.185s 00:05:35.391 sys 0m0.038s 00:05:35.391 15:05:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:35.391 15:05:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:35.391 ************************************ 00:05:35.391 END TEST rpc_trace_cmd_test 00:05:35.391 ************************************ 00:05:35.391 15:05:00 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:35.391 15:05:00 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:35.391 15:05:00 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:35.391 15:05:00 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:35.391 15:05:00 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:35.391 15:05:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:35.391 ************************************ 00:05:35.391 START TEST rpc_daemon_integrity 00:05:35.391 ************************************ 00:05:35.391 15:05:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:35.391 15:05:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:35.391 15:05:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:35.391 15:05:00 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:35.391 15:05:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:35.391 15:05:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:35.391 15:05:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:35.391 15:05:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:35.391 15:05:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:35.391 15:05:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:35.391 15:05:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:35.391 15:05:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:35.391 15:05:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:35.391 15:05:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:35.391 15:05:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:35.391 15:05:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:35.391 15:05:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:35.391 15:05:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:35.391 { 00:05:35.391 "name": "Malloc2", 00:05:35.391 "aliases": [ 00:05:35.391 "fff9325c-0333-466c-80cb-60851c1da368" 00:05:35.391 ], 00:05:35.391 "product_name": "Malloc disk", 00:05:35.391 "block_size": 512, 00:05:35.391 "num_blocks": 16384, 00:05:35.391 "uuid": "fff9325c-0333-466c-80cb-60851c1da368", 00:05:35.391 "assigned_rate_limits": { 00:05:35.391 "rw_ios_per_sec": 0, 00:05:35.391 "rw_mbytes_per_sec": 0, 00:05:35.391 "r_mbytes_per_sec": 0, 00:05:35.391 "w_mbytes_per_sec": 0 00:05:35.391 }, 00:05:35.391 "claimed": false, 00:05:35.391 "zoned": false, 00:05:35.391 "supported_io_types": { 00:05:35.391 "read": true, 00:05:35.391 "write": true, 00:05:35.391 "unmap": true, 00:05:35.391 "flush": true, 00:05:35.391 "reset": true, 00:05:35.391 "nvme_admin": false, 00:05:35.391 "nvme_io": false, 00:05:35.391 "nvme_io_md": false, 00:05:35.391 "write_zeroes": true, 00:05:35.391 "zcopy": true, 00:05:35.391 "get_zone_info": false, 00:05:35.391 "zone_management": false, 00:05:35.391 "zone_append": false, 00:05:35.391 "compare": false, 00:05:35.391 "compare_and_write": false, 00:05:35.391 "abort": true, 00:05:35.391 "seek_hole": false, 00:05:35.391 "seek_data": false, 00:05:35.391 "copy": true, 00:05:35.391 "nvme_iov_md": false 00:05:35.391 }, 00:05:35.391 "memory_domains": [ 00:05:35.391 { 00:05:35.391 "dma_device_id": "system", 00:05:35.391 "dma_device_type": 1 00:05:35.391 }, 00:05:35.391 { 00:05:35.391 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:35.391 "dma_device_type": 2 00:05:35.391 } 00:05:35.391 ], 00:05:35.391 "driver_specific": {} 00:05:35.391 } 00:05:35.391 ]' 00:05:35.391 15:05:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:35.391 15:05:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:35.391 15:05:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:35.391 15:05:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:35.391 15:05:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:35.391 [2024-11-27 15:05:00.713921] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:35.391 
[2024-11-27 15:05:00.713952] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:35.391 [2024-11-27 15:05:00.713971] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x56748b0 00:05:35.391 [2024-11-27 15:05:00.713982] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:35.391 [2024-11-27 15:05:00.714725] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:35.391 [2024-11-27 15:05:00.714745] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:35.391 Passthru0 00:05:35.391 15:05:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:35.391 15:05:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:35.391 15:05:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:35.391 15:05:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:35.650 15:05:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:35.650 15:05:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:35.650 { 00:05:35.650 "name": "Malloc2", 00:05:35.650 "aliases": [ 00:05:35.650 "fff9325c-0333-466c-80cb-60851c1da368" 00:05:35.650 ], 00:05:35.650 "product_name": "Malloc disk", 00:05:35.650 "block_size": 512, 00:05:35.650 "num_blocks": 16384, 00:05:35.650 "uuid": "fff9325c-0333-466c-80cb-60851c1da368", 00:05:35.650 "assigned_rate_limits": { 00:05:35.650 "rw_ios_per_sec": 0, 00:05:35.650 "rw_mbytes_per_sec": 0, 00:05:35.650 "r_mbytes_per_sec": 0, 00:05:35.650 "w_mbytes_per_sec": 0 00:05:35.650 }, 00:05:35.650 "claimed": true, 00:05:35.650 "claim_type": "exclusive_write", 00:05:35.650 "zoned": false, 00:05:35.650 "supported_io_types": { 00:05:35.650 "read": true, 00:05:35.650 "write": true, 00:05:35.650 "unmap": true, 00:05:35.650 "flush": true, 00:05:35.650 "reset": true, 00:05:35.650 "nvme_admin": false, 00:05:35.650 "nvme_io": false, 00:05:35.650 "nvme_io_md": false, 00:05:35.650 "write_zeroes": true, 00:05:35.650 "zcopy": true, 00:05:35.650 "get_zone_info": false, 00:05:35.650 "zone_management": false, 00:05:35.650 "zone_append": false, 00:05:35.650 "compare": false, 00:05:35.650 "compare_and_write": false, 00:05:35.650 "abort": true, 00:05:35.650 "seek_hole": false, 00:05:35.650 "seek_data": false, 00:05:35.650 "copy": true, 00:05:35.650 "nvme_iov_md": false 00:05:35.650 }, 00:05:35.650 "memory_domains": [ 00:05:35.650 { 00:05:35.650 "dma_device_id": "system", 00:05:35.650 "dma_device_type": 1 00:05:35.650 }, 00:05:35.650 { 00:05:35.650 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:35.650 "dma_device_type": 2 00:05:35.650 } 00:05:35.650 ], 00:05:35.650 "driver_specific": {} 00:05:35.650 }, 00:05:35.650 { 00:05:35.650 "name": "Passthru0", 00:05:35.650 "aliases": [ 00:05:35.650 "3c469daa-ba81-576c-92dd-16659c53fefc" 00:05:35.650 ], 00:05:35.650 "product_name": "passthru", 00:05:35.650 "block_size": 512, 00:05:35.650 "num_blocks": 16384, 00:05:35.650 "uuid": "3c469daa-ba81-576c-92dd-16659c53fefc", 00:05:35.650 "assigned_rate_limits": { 00:05:35.650 "rw_ios_per_sec": 0, 00:05:35.650 "rw_mbytes_per_sec": 0, 00:05:35.650 "r_mbytes_per_sec": 0, 00:05:35.650 "w_mbytes_per_sec": 0 00:05:35.650 }, 00:05:35.650 "claimed": false, 00:05:35.650 "zoned": false, 00:05:35.650 "supported_io_types": { 00:05:35.650 "read": true, 00:05:35.650 "write": true, 00:05:35.650 "unmap": true, 00:05:35.650 "flush": true, 00:05:35.650 "reset": true, 
00:05:35.650 "nvme_admin": false, 00:05:35.650 "nvme_io": false, 00:05:35.650 "nvme_io_md": false, 00:05:35.650 "write_zeroes": true, 00:05:35.650 "zcopy": true, 00:05:35.650 "get_zone_info": false, 00:05:35.650 "zone_management": false, 00:05:35.650 "zone_append": false, 00:05:35.650 "compare": false, 00:05:35.650 "compare_and_write": false, 00:05:35.650 "abort": true, 00:05:35.650 "seek_hole": false, 00:05:35.650 "seek_data": false, 00:05:35.650 "copy": true, 00:05:35.650 "nvme_iov_md": false 00:05:35.650 }, 00:05:35.650 "memory_domains": [ 00:05:35.650 { 00:05:35.650 "dma_device_id": "system", 00:05:35.650 "dma_device_type": 1 00:05:35.650 }, 00:05:35.650 { 00:05:35.650 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:35.650 "dma_device_type": 2 00:05:35.650 } 00:05:35.650 ], 00:05:35.650 "driver_specific": { 00:05:35.650 "passthru": { 00:05:35.650 "name": "Passthru0", 00:05:35.650 "base_bdev_name": "Malloc2" 00:05:35.650 } 00:05:35.650 } 00:05:35.650 } 00:05:35.650 ]' 00:05:35.650 15:05:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:35.650 15:05:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:35.650 15:05:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:35.650 15:05:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:35.650 15:05:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:35.650 15:05:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:35.650 15:05:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:35.650 15:05:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:35.650 15:05:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:35.650 15:05:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:35.650 15:05:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:35.650 15:05:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:35.650 15:05:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:35.650 15:05:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:35.650 15:05:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:35.650 15:05:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:35.650 15:05:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:35.650 00:05:35.650 real 0m0.288s 00:05:35.650 user 0m0.181s 00:05:35.650 sys 0m0.045s 00:05:35.650 15:05:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:35.650 15:05:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:35.650 ************************************ 00:05:35.650 END TEST rpc_daemon_integrity 00:05:35.650 ************************************ 00:05:35.650 15:05:00 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:35.650 15:05:00 rpc -- rpc/rpc.sh@84 -- # killprocess 2351369 00:05:35.650 15:05:00 rpc -- common/autotest_common.sh@954 -- # '[' -z 2351369 ']' 00:05:35.650 15:05:00 rpc -- common/autotest_common.sh@958 -- # kill -0 2351369 00:05:35.650 15:05:00 rpc -- common/autotest_common.sh@959 -- # uname 00:05:35.650 15:05:00 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:35.650 15:05:00 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2351369 
00:05:35.650 15:05:00 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:35.650 15:05:00 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:35.650 15:05:00 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2351369' 00:05:35.650 killing process with pid 2351369 00:05:35.650 15:05:00 rpc -- common/autotest_common.sh@973 -- # kill 2351369 00:05:35.650 15:05:00 rpc -- common/autotest_common.sh@978 -- # wait 2351369 00:05:36.216 00:05:36.216 real 0m2.187s 00:05:36.216 user 0m2.763s 00:05:36.216 sys 0m0.819s 00:05:36.216 15:05:01 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:36.216 15:05:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.216 ************************************ 00:05:36.216 END TEST rpc 00:05:36.216 ************************************ 00:05:36.216 15:05:01 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:36.216 15:05:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:36.216 15:05:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:36.216 15:05:01 -- common/autotest_common.sh@10 -- # set +x 00:05:36.216 ************************************ 00:05:36.216 START TEST skip_rpc 00:05:36.216 ************************************ 00:05:36.216 15:05:01 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:36.216 * Looking for test storage... 00:05:36.216 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:05:36.216 15:05:01 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:36.216 15:05:01 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:36.216 15:05:01 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:36.216 15:05:01 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:36.216 15:05:01 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:36.216 15:05:01 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:36.216 15:05:01 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:36.216 15:05:01 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:36.216 15:05:01 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:36.216 15:05:01 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:36.216 15:05:01 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:36.216 15:05:01 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:36.216 15:05:01 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:36.216 15:05:01 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:36.216 15:05:01 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:36.216 15:05:01 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:36.216 15:05:01 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:36.216 15:05:01 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:36.216 15:05:01 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:36.216 15:05:01 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:36.216 15:05:01 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:36.216 15:05:01 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:36.216 15:05:01 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:36.216 15:05:01 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:36.216 15:05:01 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:36.216 15:05:01 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:36.216 15:05:01 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:36.216 15:05:01 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:36.216 15:05:01 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:36.216 15:05:01 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:36.216 15:05:01 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:36.216 15:05:01 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:36.216 15:05:01 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:36.216 15:05:01 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:36.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.216 --rc genhtml_branch_coverage=1 00:05:36.216 --rc genhtml_function_coverage=1 00:05:36.216 --rc genhtml_legend=1 00:05:36.216 --rc geninfo_all_blocks=1 00:05:36.216 --rc geninfo_unexecuted_blocks=1 00:05:36.216 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:36.216 ' 00:05:36.216 15:05:01 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:36.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.216 --rc genhtml_branch_coverage=1 00:05:36.216 --rc genhtml_function_coverage=1 00:05:36.216 --rc genhtml_legend=1 00:05:36.216 --rc geninfo_all_blocks=1 00:05:36.216 --rc geninfo_unexecuted_blocks=1 00:05:36.216 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:36.216 ' 00:05:36.216 15:05:01 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:36.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.216 --rc genhtml_branch_coverage=1 00:05:36.216 --rc genhtml_function_coverage=1 00:05:36.216 --rc genhtml_legend=1 00:05:36.216 --rc geninfo_all_blocks=1 00:05:36.216 --rc geninfo_unexecuted_blocks=1 00:05:36.216 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:36.216 ' 00:05:36.216 15:05:01 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:36.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.216 --rc genhtml_branch_coverage=1 00:05:36.216 --rc genhtml_function_coverage=1 00:05:36.216 --rc genhtml_legend=1 00:05:36.216 --rc geninfo_all_blocks=1 00:05:36.216 --rc geninfo_unexecuted_blocks=1 00:05:36.216 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:36.216 ' 00:05:36.216 15:05:01 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:05:36.216 15:05:01 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:05:36.216 15:05:01 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:36.216 15:05:01 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:36.216 15:05:01 
skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:36.216 15:05:01 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.475 ************************************ 00:05:36.475 START TEST skip_rpc 00:05:36.475 ************************************ 00:05:36.475 15:05:01 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:05:36.475 15:05:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:36.475 15:05:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2351845 00:05:36.475 15:05:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:36.475 15:05:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:36.475 [2024-11-27 15:05:01.571906] Starting SPDK v25.01-pre git sha1 2e10c84c8 / DPDK 24.03.0 initialization... 00:05:36.475 [2024-11-27 15:05:01.571959] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2351845 ] 00:05:36.475 [2024-11-27 15:05:01.635376] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.475 [2024-11-27 15:05:01.675589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.741 15:05:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:41.741 15:05:06 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:41.741 15:05:06 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:41.741 15:05:06 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:41.741 15:05:06 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:41.741 15:05:06 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:41.741 15:05:06 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:41.741 15:05:06 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:41.741 15:05:06 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.741 15:05:06 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.741 15:05:06 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:41.741 15:05:06 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:41.741 15:05:06 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:41.741 15:05:06 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:41.741 15:05:06 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:41.741 15:05:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:41.741 15:05:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2351845 00:05:41.741 15:05:06 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 2351845 ']' 00:05:41.741 15:05:06 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 2351845 00:05:41.741 15:05:06 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:41.741 15:05:06 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:41.741 15:05:06 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2351845 
00:05:41.741 15:05:06 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:41.741 15:05:06 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:41.741 15:05:06 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2351845' 00:05:41.741 killing process with pid 2351845 00:05:41.741 15:05:06 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 2351845 00:05:41.741 15:05:06 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 2351845 00:05:41.741 00:05:41.741 real 0m5.374s 00:05:41.741 user 0m5.156s 00:05:41.741 sys 0m0.270s 00:05:41.741 15:05:06 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:41.741 15:05:06 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.741 ************************************ 00:05:41.741 END TEST skip_rpc 00:05:41.741 ************************************ 00:05:41.741 15:05:06 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:41.741 15:05:06 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:41.741 15:05:06 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.741 15:05:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.741 ************************************ 00:05:41.741 START TEST skip_rpc_with_json 00:05:41.741 ************************************ 00:05:41.741 15:05:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:05:41.741 15:05:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:41.741 15:05:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:41.741 15:05:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2352931 00:05:41.742 15:05:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:41.742 15:05:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2352931 00:05:41.742 15:05:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 2352931 ']' 00:05:41.742 15:05:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.742 15:05:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:41.742 15:05:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:41.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:41.742 15:05:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:41.742 15:05:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:41.742 [2024-11-27 15:05:07.021098] Starting SPDK v25.01-pre git sha1 2e10c84c8 / DPDK 24.03.0 initialization... 
00:05:41.742 [2024-11-27 15:05:07.021152] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2352931 ] 00:05:42.000 [2024-11-27 15:05:07.087309] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.000 [2024-11-27 15:05:07.126562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.257 15:05:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:42.257 15:05:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:05:42.257 15:05:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:42.257 15:05:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.257 15:05:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:42.258 [2024-11-27 15:05:07.347534] nvmf_rpc.c:2706:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:42.258 request: 00:05:42.258 { 00:05:42.258 "trtype": "tcp", 00:05:42.258 "method": "nvmf_get_transports", 00:05:42.258 "req_id": 1 00:05:42.258 } 00:05:42.258 Got JSON-RPC error response 00:05:42.258 response: 00:05:42.258 { 00:05:42.258 "code": -19, 00:05:42.258 "message": "No such device" 00:05:42.258 } 00:05:42.258 15:05:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:42.258 15:05:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:42.258 15:05:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.258 15:05:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:42.258 [2024-11-27 15:05:07.359636] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:42.258 15:05:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.258 15:05:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:42.258 15:05:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.258 15:05:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:42.258 15:05:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.258 15:05:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:05:42.258 { 00:05:42.258 "subsystems": [ 00:05:42.258 { 00:05:42.258 "subsystem": "scheduler", 00:05:42.258 "config": [ 00:05:42.258 { 00:05:42.258 "method": "framework_set_scheduler", 00:05:42.258 "params": { 00:05:42.258 "name": "static" 00:05:42.258 } 00:05:42.258 } 00:05:42.258 ] 00:05:42.258 }, 00:05:42.258 { 00:05:42.258 "subsystem": "vmd", 00:05:42.258 "config": [] 00:05:42.258 }, 00:05:42.258 { 00:05:42.258 "subsystem": "sock", 00:05:42.258 "config": [ 00:05:42.258 { 00:05:42.258 "method": "sock_set_default_impl", 00:05:42.258 "params": { 00:05:42.258 "impl_name": "posix" 00:05:42.258 } 00:05:42.258 }, 00:05:42.258 { 00:05:42.258 "method": "sock_impl_set_options", 00:05:42.258 "params": { 00:05:42.258 "impl_name": "ssl", 00:05:42.258 "recv_buf_size": 4096, 00:05:42.258 "send_buf_size": 4096, 00:05:42.258 "enable_recv_pipe": true, 00:05:42.258 "enable_quickack": false, 00:05:42.258 
"enable_placement_id": 0, 00:05:42.258 "enable_zerocopy_send_server": true, 00:05:42.258 "enable_zerocopy_send_client": false, 00:05:42.258 "zerocopy_threshold": 0, 00:05:42.258 "tls_version": 0, 00:05:42.258 "enable_ktls": false 00:05:42.258 } 00:05:42.258 }, 00:05:42.258 { 00:05:42.258 "method": "sock_impl_set_options", 00:05:42.258 "params": { 00:05:42.258 "impl_name": "posix", 00:05:42.258 "recv_buf_size": 2097152, 00:05:42.258 "send_buf_size": 2097152, 00:05:42.258 "enable_recv_pipe": true, 00:05:42.258 "enable_quickack": false, 00:05:42.258 "enable_placement_id": 0, 00:05:42.258 "enable_zerocopy_send_server": true, 00:05:42.258 "enable_zerocopy_send_client": false, 00:05:42.258 "zerocopy_threshold": 0, 00:05:42.258 "tls_version": 0, 00:05:42.258 "enable_ktls": false 00:05:42.258 } 00:05:42.258 } 00:05:42.258 ] 00:05:42.258 }, 00:05:42.258 { 00:05:42.258 "subsystem": "iobuf", 00:05:42.258 "config": [ 00:05:42.258 { 00:05:42.258 "method": "iobuf_set_options", 00:05:42.258 "params": { 00:05:42.258 "small_pool_count": 8192, 00:05:42.258 "large_pool_count": 1024, 00:05:42.258 "small_bufsize": 8192, 00:05:42.258 "large_bufsize": 135168, 00:05:42.258 "enable_numa": false 00:05:42.258 } 00:05:42.258 } 00:05:42.258 ] 00:05:42.258 }, 00:05:42.258 { 00:05:42.258 "subsystem": "keyring", 00:05:42.258 "config": [] 00:05:42.258 }, 00:05:42.258 { 00:05:42.258 "subsystem": "vfio_user_target", 00:05:42.258 "config": null 00:05:42.258 }, 00:05:42.258 { 00:05:42.258 "subsystem": "fsdev", 00:05:42.258 "config": [ 00:05:42.258 { 00:05:42.258 "method": "fsdev_set_opts", 00:05:42.258 "params": { 00:05:42.258 "fsdev_io_pool_size": 65535, 00:05:42.258 "fsdev_io_cache_size": 256 00:05:42.258 } 00:05:42.258 } 00:05:42.258 ] 00:05:42.258 }, 00:05:42.258 { 00:05:42.258 "subsystem": "accel", 00:05:42.258 "config": [ 00:05:42.258 { 00:05:42.258 "method": "accel_set_options", 00:05:42.258 "params": { 00:05:42.258 "small_cache_size": 128, 00:05:42.258 "large_cache_size": 16, 00:05:42.258 "task_count": 2048, 00:05:42.258 "sequence_count": 2048, 00:05:42.258 "buf_count": 2048 00:05:42.258 } 00:05:42.258 } 00:05:42.258 ] 00:05:42.258 }, 00:05:42.258 { 00:05:42.258 "subsystem": "bdev", 00:05:42.258 "config": [ 00:05:42.258 { 00:05:42.258 "method": "bdev_set_options", 00:05:42.258 "params": { 00:05:42.258 "bdev_io_pool_size": 65535, 00:05:42.258 "bdev_io_cache_size": 256, 00:05:42.258 "bdev_auto_examine": true, 00:05:42.258 "iobuf_small_cache_size": 128, 00:05:42.258 "iobuf_large_cache_size": 16 00:05:42.258 } 00:05:42.258 }, 00:05:42.258 { 00:05:42.258 "method": "bdev_raid_set_options", 00:05:42.258 "params": { 00:05:42.258 "process_window_size_kb": 1024, 00:05:42.258 "process_max_bandwidth_mb_sec": 0 00:05:42.258 } 00:05:42.258 }, 00:05:42.258 { 00:05:42.258 "method": "bdev_nvme_set_options", 00:05:42.258 "params": { 00:05:42.258 "action_on_timeout": "none", 00:05:42.258 "timeout_us": 0, 00:05:42.258 "timeout_admin_us": 0, 00:05:42.258 "keep_alive_timeout_ms": 10000, 00:05:42.258 "arbitration_burst": 0, 00:05:42.258 "low_priority_weight": 0, 00:05:42.258 "medium_priority_weight": 0, 00:05:42.258 "high_priority_weight": 0, 00:05:42.258 "nvme_adminq_poll_period_us": 10000, 00:05:42.258 "nvme_ioq_poll_period_us": 0, 00:05:42.258 "io_queue_requests": 0, 00:05:42.258 "delay_cmd_submit": true, 00:05:42.258 "transport_retry_count": 4, 00:05:42.258 "bdev_retry_count": 3, 00:05:42.258 "transport_ack_timeout": 0, 00:05:42.258 "ctrlr_loss_timeout_sec": 0, 00:05:42.258 "reconnect_delay_sec": 0, 00:05:42.258 
"fast_io_fail_timeout_sec": 0, 00:05:42.258 "disable_auto_failback": false, 00:05:42.258 "generate_uuids": false, 00:05:42.258 "transport_tos": 0, 00:05:42.258 "nvme_error_stat": false, 00:05:42.258 "rdma_srq_size": 0, 00:05:42.258 "io_path_stat": false, 00:05:42.258 "allow_accel_sequence": false, 00:05:42.258 "rdma_max_cq_size": 0, 00:05:42.258 "rdma_cm_event_timeout_ms": 0, 00:05:42.258 "dhchap_digests": [ 00:05:42.258 "sha256", 00:05:42.258 "sha384", 00:05:42.258 "sha512" 00:05:42.258 ], 00:05:42.258 "dhchap_dhgroups": [ 00:05:42.258 "null", 00:05:42.258 "ffdhe2048", 00:05:42.258 "ffdhe3072", 00:05:42.258 "ffdhe4096", 00:05:42.258 "ffdhe6144", 00:05:42.258 "ffdhe8192" 00:05:42.258 ] 00:05:42.258 } 00:05:42.258 }, 00:05:42.258 { 00:05:42.258 "method": "bdev_nvme_set_hotplug", 00:05:42.258 "params": { 00:05:42.258 "period_us": 100000, 00:05:42.258 "enable": false 00:05:42.258 } 00:05:42.258 }, 00:05:42.258 { 00:05:42.258 "method": "bdev_iscsi_set_options", 00:05:42.258 "params": { 00:05:42.258 "timeout_sec": 30 00:05:42.258 } 00:05:42.258 }, 00:05:42.258 { 00:05:42.258 "method": "bdev_wait_for_examine" 00:05:42.258 } 00:05:42.258 ] 00:05:42.258 }, 00:05:42.258 { 00:05:42.258 "subsystem": "nvmf", 00:05:42.258 "config": [ 00:05:42.258 { 00:05:42.258 "method": "nvmf_set_config", 00:05:42.258 "params": { 00:05:42.258 "discovery_filter": "match_any", 00:05:42.258 "admin_cmd_passthru": { 00:05:42.258 "identify_ctrlr": false 00:05:42.258 }, 00:05:42.258 "dhchap_digests": [ 00:05:42.258 "sha256", 00:05:42.258 "sha384", 00:05:42.258 "sha512" 00:05:42.258 ], 00:05:42.258 "dhchap_dhgroups": [ 00:05:42.258 "null", 00:05:42.258 "ffdhe2048", 00:05:42.258 "ffdhe3072", 00:05:42.258 "ffdhe4096", 00:05:42.258 "ffdhe6144", 00:05:42.258 "ffdhe8192" 00:05:42.258 ] 00:05:42.258 } 00:05:42.258 }, 00:05:42.258 { 00:05:42.258 "method": "nvmf_set_max_subsystems", 00:05:42.258 "params": { 00:05:42.258 "max_subsystems": 1024 00:05:42.258 } 00:05:42.258 }, 00:05:42.258 { 00:05:42.258 "method": "nvmf_set_crdt", 00:05:42.258 "params": { 00:05:42.258 "crdt1": 0, 00:05:42.258 "crdt2": 0, 00:05:42.258 "crdt3": 0 00:05:42.258 } 00:05:42.258 }, 00:05:42.258 { 00:05:42.258 "method": "nvmf_create_transport", 00:05:42.258 "params": { 00:05:42.258 "trtype": "TCP", 00:05:42.258 "max_queue_depth": 128, 00:05:42.258 "max_io_qpairs_per_ctrlr": 127, 00:05:42.258 "in_capsule_data_size": 4096, 00:05:42.258 "max_io_size": 131072, 00:05:42.259 "io_unit_size": 131072, 00:05:42.259 "max_aq_depth": 128, 00:05:42.259 "num_shared_buffers": 511, 00:05:42.259 "buf_cache_size": 4294967295, 00:05:42.259 "dif_insert_or_strip": false, 00:05:42.259 "zcopy": false, 00:05:42.259 "c2h_success": true, 00:05:42.259 "sock_priority": 0, 00:05:42.259 "abort_timeout_sec": 1, 00:05:42.259 "ack_timeout": 0, 00:05:42.259 "data_wr_pool_size": 0 00:05:42.259 } 00:05:42.259 } 00:05:42.259 ] 00:05:42.259 }, 00:05:42.259 { 00:05:42.259 "subsystem": "nbd", 00:05:42.259 "config": [] 00:05:42.259 }, 00:05:42.259 { 00:05:42.259 "subsystem": "ublk", 00:05:42.259 "config": [] 00:05:42.259 }, 00:05:42.259 { 00:05:42.259 "subsystem": "vhost_blk", 00:05:42.259 "config": [] 00:05:42.259 }, 00:05:42.259 { 00:05:42.259 "subsystem": "scsi", 00:05:42.259 "config": null 00:05:42.259 }, 00:05:42.259 { 00:05:42.259 "subsystem": "iscsi", 00:05:42.259 "config": [ 00:05:42.259 { 00:05:42.259 "method": "iscsi_set_options", 00:05:42.259 "params": { 00:05:42.259 "node_base": "iqn.2016-06.io.spdk", 00:05:42.259 "max_sessions": 128, 00:05:42.259 "max_connections_per_session": 2, 
00:05:42.259 "max_queue_depth": 64, 00:05:42.259 "default_time2wait": 2, 00:05:42.259 "default_time2retain": 20, 00:05:42.259 "first_burst_length": 8192, 00:05:42.259 "immediate_data": true, 00:05:42.259 "allow_duplicated_isid": false, 00:05:42.259 "error_recovery_level": 0, 00:05:42.259 "nop_timeout": 60, 00:05:42.259 "nop_in_interval": 30, 00:05:42.259 "disable_chap": false, 00:05:42.259 "require_chap": false, 00:05:42.259 "mutual_chap": false, 00:05:42.259 "chap_group": 0, 00:05:42.259 "max_large_datain_per_connection": 64, 00:05:42.259 "max_r2t_per_connection": 4, 00:05:42.259 "pdu_pool_size": 36864, 00:05:42.259 "immediate_data_pool_size": 16384, 00:05:42.259 "data_out_pool_size": 2048 00:05:42.259 } 00:05:42.259 } 00:05:42.259 ] 00:05:42.259 }, 00:05:42.259 { 00:05:42.259 "subsystem": "vhost_scsi", 00:05:42.259 "config": [] 00:05:42.259 } 00:05:42.259 ] 00:05:42.259 } 00:05:42.259 15:05:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:42.259 15:05:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2352931 00:05:42.259 15:05:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 2352931 ']' 00:05:42.259 15:05:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 2352931 00:05:42.259 15:05:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:42.259 15:05:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:42.259 15:05:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2352931 00:05:42.516 15:05:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:42.516 15:05:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:42.516 15:05:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2352931' 00:05:42.516 killing process with pid 2352931 00:05:42.516 15:05:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 2352931 00:05:42.516 15:05:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 2352931 00:05:42.801 15:05:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2353062 00:05:42.801 15:05:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:42.801 15:05:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:05:48.203 15:05:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2353062 00:05:48.203 15:05:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 2353062 ']' 00:05:48.203 15:05:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 2353062 00:05:48.203 15:05:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:48.203 15:05:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:48.204 15:05:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2353062 00:05:48.204 15:05:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:48.204 15:05:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:48.204 15:05:12 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2353062' 00:05:48.204 killing process with pid 2353062 00:05:48.204 15:05:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 2353062 00:05:48.204 15:05:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 2353062 00:05:48.204 15:05:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:05:48.204 15:05:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:05:48.204 00:05:48.204 real 0m6.273s 00:05:48.204 user 0m5.986s 00:05:48.204 sys 0m0.632s 00:05:48.204 15:05:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:48.204 15:05:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:48.204 ************************************ 00:05:48.204 END TEST skip_rpc_with_json 00:05:48.204 ************************************ 00:05:48.204 15:05:13 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:48.204 15:05:13 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:48.204 15:05:13 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:48.204 15:05:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.204 ************************************ 00:05:48.204 START TEST skip_rpc_with_delay 00:05:48.204 ************************************ 00:05:48.204 15:05:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:48.204 15:05:13 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:48.204 15:05:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:05:48.204 15:05:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:48.204 15:05:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:48.204 15:05:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:48.204 15:05:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:48.204 15:05:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:48.204 15:05:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:48.204 15:05:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:48.204 15:05:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:48.204 15:05:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:48.204 15:05:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:48.204 [2024-11-27 15:05:13.379982] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:05:48.204 15:05:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:48.204 15:05:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:48.204 15:05:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:48.204 15:05:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:48.204 00:05:48.204 real 0m0.043s 00:05:48.204 user 0m0.021s 00:05:48.204 sys 0m0.022s 00:05:48.204 15:05:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:48.204 15:05:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:48.204 ************************************ 00:05:48.204 END TEST skip_rpc_with_delay 00:05:48.204 ************************************ 00:05:48.204 15:05:13 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:48.204 15:05:13 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:48.204 15:05:13 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:48.204 15:05:13 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:48.204 15:05:13 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:48.204 15:05:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.204 ************************************ 00:05:48.204 START TEST exit_on_failed_rpc_init 00:05:48.204 ************************************ 00:05:48.204 15:05:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:48.204 15:05:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2354070 00:05:48.204 15:05:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2354070 00:05:48.204 15:05:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:48.204 15:05:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 2354070 ']' 00:05:48.204 15:05:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.204 15:05:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:48.204 15:05:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:48.204 15:05:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:48.204 15:05:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:48.204 [2024-11-27 15:05:13.500455] Starting SPDK v25.01-pre git sha1 2e10c84c8 / DPDK 24.03.0 initialization... 
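The skip_rpc_with_json run above boils down to: query a transport that does not exist yet (the -19 "No such device" error), create it, dump the live configuration with save_config, kill the target, then restart spdk_tgt from that JSON with no RPC server at all and grep its log for the transport init message. A condensed sketch of that flow using the same rpc_cmd wrapper and flags shown in the trace; redirecting save_config into config.json and the $SPDK_BIN shorthand for build/bin are assumptions made for brevity:

    rpc_cmd nvmf_get_transports --trtype tcp || true   # fails here: no TCP transport exists yet
    rpc_cmd nvmf_create_transport -t tcp               # bring the TCP transport up
    rpc_cmd save_config > test/rpc/config.json         # snapshot the running config as JSON
    killprocess "$spdk_pid"
    # replay the snapshot without an RPC server and confirm the transport is recreated
    "$SPDK_BIN"/spdk_tgt --no-rpc-server -m 0x1 --json test/rpc/config.json &> test/rpc/log.txt
    grep -q 'TCP Transport Init' test/rpc/log.txt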
00:05:48.204 [2024-11-27 15:05:13.500537] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2354070 ] 00:05:48.462 [2024-11-27 15:05:13.572815] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.462 [2024-11-27 15:05:13.614878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.721 15:05:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:48.721 15:05:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:48.721 15:05:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:48.721 15:05:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:48.721 15:05:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:48.721 15:05:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:48.721 15:05:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:48.721 15:05:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:48.721 15:05:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:48.721 15:05:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:48.721 15:05:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:48.721 15:05:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:48.721 15:05:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:48.721 15:05:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:48.721 15:05:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:48.721 [2024-11-27 15:05:13.855772] Starting SPDK v25.01-pre git sha1 2e10c84c8 / DPDK 24.03.0 initialization... 00:05:48.721 [2024-11-27 15:05:13.855835] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2354081 ] 00:05:48.721 [2024-11-27 15:05:13.926693] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.721 [2024-11-27 15:05:13.966764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:48.721 [2024-11-27 15:05:13.966836] rpc.c: 181:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:48.721 [2024-11-27 15:05:13.966847] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:48.721 [2024-11-27 15:05:13.966855] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:48.721 15:05:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:48.721 15:05:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:48.721 15:05:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:48.721 15:05:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:48.721 15:05:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:48.721 15:05:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:48.721 15:05:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:48.721 15:05:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2354070 00:05:48.721 15:05:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 2354070 ']' 00:05:48.721 15:05:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 2354070 00:05:48.721 15:05:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:48.721 15:05:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:48.721 15:05:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2354070 00:05:48.979 15:05:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:48.979 15:05:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:48.979 15:05:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2354070' 00:05:48.979 killing process with pid 2354070 00:05:48.979 15:05:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 2354070 00:05:48.979 15:05:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 2354070 00:05:49.238 00:05:49.238 real 0m0.889s 00:05:49.238 user 0m0.917s 00:05:49.238 sys 0m0.383s 00:05:49.238 15:05:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:49.238 15:05:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:49.238 ************************************ 00:05:49.238 END TEST exit_on_failed_rpc_init 00:05:49.238 ************************************ 00:05:49.238 15:05:14 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:05:49.238 00:05:49.238 real 0m13.076s 00:05:49.238 user 0m12.301s 00:05:49.238 sys 0m1.622s 00:05:49.238 15:05:14 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:49.238 15:05:14 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.238 ************************************ 00:05:49.238 END TEST skip_rpc 00:05:49.238 ************************************ 00:05:49.238 15:05:14 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:49.238 15:05:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:49.238 15:05:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:49.238 15:05:14 
-- common/autotest_common.sh@10 -- # set +x 00:05:49.238 ************************************ 00:05:49.238 START TEST rpc_client 00:05:49.238 ************************************ 00:05:49.238 15:05:14 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:49.498 * Looking for test storage... 00:05:49.498 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client 00:05:49.498 15:05:14 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:49.498 15:05:14 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:05:49.498 15:05:14 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:49.498 15:05:14 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:49.498 15:05:14 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:49.498 15:05:14 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:49.498 15:05:14 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:49.498 15:05:14 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:49.498 15:05:14 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:49.498 15:05:14 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:49.498 15:05:14 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:49.498 15:05:14 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:49.498 15:05:14 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:49.498 15:05:14 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:49.498 15:05:14 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:49.498 15:05:14 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:49.498 15:05:14 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:49.498 15:05:14 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:49.498 15:05:14 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:49.498 15:05:14 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:49.498 15:05:14 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:49.498 15:05:14 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:49.498 15:05:14 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:49.498 15:05:14 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:49.498 15:05:14 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:49.498 15:05:14 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:49.498 15:05:14 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:49.498 15:05:14 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:49.498 15:05:14 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:49.498 15:05:14 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:49.498 15:05:14 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:49.498 15:05:14 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:49.498 15:05:14 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:49.498 15:05:14 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:49.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.498 --rc genhtml_branch_coverage=1 00:05:49.498 --rc genhtml_function_coverage=1 00:05:49.498 --rc genhtml_legend=1 00:05:49.498 --rc geninfo_all_blocks=1 00:05:49.498 --rc geninfo_unexecuted_blocks=1 00:05:49.498 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:49.498 ' 00:05:49.498 15:05:14 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:49.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.498 --rc genhtml_branch_coverage=1 00:05:49.498 --rc genhtml_function_coverage=1 00:05:49.498 --rc genhtml_legend=1 00:05:49.498 --rc geninfo_all_blocks=1 00:05:49.498 --rc geninfo_unexecuted_blocks=1 00:05:49.498 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:49.498 ' 00:05:49.498 15:05:14 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:49.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.498 --rc genhtml_branch_coverage=1 00:05:49.498 --rc genhtml_function_coverage=1 00:05:49.498 --rc genhtml_legend=1 00:05:49.498 --rc geninfo_all_blocks=1 00:05:49.498 --rc geninfo_unexecuted_blocks=1 00:05:49.498 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:49.498 ' 00:05:49.498 15:05:14 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:49.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.498 --rc genhtml_branch_coverage=1 00:05:49.498 --rc genhtml_function_coverage=1 00:05:49.498 --rc genhtml_legend=1 00:05:49.498 --rc geninfo_all_blocks=1 00:05:49.498 --rc geninfo_unexecuted_blocks=1 00:05:49.498 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:49.498 ' 00:05:49.498 15:05:14 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:49.498 OK 00:05:49.498 15:05:14 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:49.498 00:05:49.498 real 0m0.208s 00:05:49.498 user 0m0.115s 00:05:49.498 sys 0m0.111s 00:05:49.498 15:05:14 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:05:49.498 15:05:14 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:49.498 ************************************ 00:05:49.498 END TEST rpc_client 00:05:49.498 ************************************ 00:05:49.498 15:05:14 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:05:49.498 15:05:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:49.498 15:05:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:49.498 15:05:14 -- common/autotest_common.sh@10 -- # set +x 00:05:49.498 ************************************ 00:05:49.498 START TEST json_config 00:05:49.498 ************************************ 00:05:49.498 15:05:14 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:05:49.758 15:05:14 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:49.758 15:05:14 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:05:49.758 15:05:14 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:49.758 15:05:14 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:49.758 15:05:14 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:49.758 15:05:14 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:49.758 15:05:14 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:49.758 15:05:14 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:49.758 15:05:14 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:49.758 15:05:14 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:49.758 15:05:14 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:49.758 15:05:14 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:49.758 15:05:14 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:49.758 15:05:14 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:49.758 15:05:14 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:49.758 15:05:14 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:49.758 15:05:14 json_config -- scripts/common.sh@345 -- # : 1 00:05:49.758 15:05:14 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:49.758 15:05:14 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:49.758 15:05:14 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:49.758 15:05:14 json_config -- scripts/common.sh@353 -- # local d=1 00:05:49.759 15:05:14 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:49.759 15:05:14 json_config -- scripts/common.sh@355 -- # echo 1 00:05:49.759 15:05:14 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:49.759 15:05:14 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:49.759 15:05:14 json_config -- scripts/common.sh@353 -- # local d=2 00:05:49.759 15:05:14 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:49.759 15:05:14 json_config -- scripts/common.sh@355 -- # echo 2 00:05:49.759 15:05:14 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:49.759 15:05:14 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:49.759 15:05:14 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:49.759 15:05:14 json_config -- scripts/common.sh@368 -- # return 0 00:05:49.759 15:05:14 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:49.759 15:05:14 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:49.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.759 --rc genhtml_branch_coverage=1 00:05:49.759 --rc genhtml_function_coverage=1 00:05:49.759 --rc genhtml_legend=1 00:05:49.759 --rc geninfo_all_blocks=1 00:05:49.759 --rc geninfo_unexecuted_blocks=1 00:05:49.759 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:49.759 ' 00:05:49.759 15:05:14 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:49.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.759 --rc genhtml_branch_coverage=1 00:05:49.759 --rc genhtml_function_coverage=1 00:05:49.759 --rc genhtml_legend=1 00:05:49.759 --rc geninfo_all_blocks=1 00:05:49.759 --rc geninfo_unexecuted_blocks=1 00:05:49.759 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:49.759 ' 00:05:49.759 15:05:14 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:49.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.759 --rc genhtml_branch_coverage=1 00:05:49.759 --rc genhtml_function_coverage=1 00:05:49.759 --rc genhtml_legend=1 00:05:49.759 --rc geninfo_all_blocks=1 00:05:49.759 --rc geninfo_unexecuted_blocks=1 00:05:49.759 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:49.759 ' 00:05:49.759 15:05:14 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:49.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.759 --rc genhtml_branch_coverage=1 00:05:49.759 --rc genhtml_function_coverage=1 00:05:49.759 --rc genhtml_legend=1 00:05:49.759 --rc geninfo_all_blocks=1 00:05:49.759 --rc geninfo_unexecuted_blocks=1 00:05:49.759 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:49.759 ' 00:05:49.759 15:05:14 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:05:49.759 15:05:14 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:49.759 15:05:14 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:49.759 15:05:14 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:49.759 15:05:14 json_config -- nvmf/common.sh@10 
-- # NVMF_SECOND_PORT=4421 00:05:49.759 15:05:14 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:49.759 15:05:14 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:49.759 15:05:14 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:49.759 15:05:14 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:49.759 15:05:14 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:49.759 15:05:14 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:49.759 15:05:14 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:49.759 15:05:14 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:05:49.759 15:05:14 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:05:49.759 15:05:14 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:49.759 15:05:14 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:49.759 15:05:14 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:49.759 15:05:14 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:49.759 15:05:14 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:05:49.759 15:05:14 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:49.759 15:05:14 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:49.759 15:05:14 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:49.759 15:05:14 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:49.759 15:05:14 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:49.759 15:05:14 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:49.759 15:05:14 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:49.759 15:05:14 json_config -- paths/export.sh@5 -- # export PATH 00:05:49.759 15:05:14 json_config -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:49.759 15:05:14 json_config -- nvmf/common.sh@51 -- # : 0 00:05:49.759 15:05:14 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:49.759 15:05:14 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:49.759 15:05:14 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:49.759 15:05:14 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:49.759 15:05:14 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:49.759 15:05:14 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:49.759 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:49.759 15:05:14 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:49.759 15:05:14 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:49.759 15:05:14 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:49.759 15:05:14 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/common.sh 00:05:49.759 15:05:14 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:49.759 15:05:14 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:49.759 15:05:14 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:49.759 15:05:14 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:49.759 15:05:14 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:49.759 WARNING: No tests are enabled so not running JSON configuration tests 00:05:49.759 15:05:14 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:49.759 00:05:49.759 real 0m0.200s 00:05:49.759 user 0m0.113s 00:05:49.759 sys 0m0.096s 00:05:49.759 15:05:14 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:49.759 15:05:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:49.759 ************************************ 00:05:49.759 END TEST json_config 00:05:49.759 ************************************ 00:05:49.759 15:05:15 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:49.759 15:05:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:49.759 15:05:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:49.759 15:05:15 -- common/autotest_common.sh@10 -- # set +x 00:05:49.759 ************************************ 00:05:49.759 START TEST json_config_extra_key 00:05:49.759 ************************************ 00:05:49.759 15:05:15 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:50.019 15:05:15 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:50.019 15:05:15 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov 
--version 00:05:50.019 15:05:15 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:50.019 15:05:15 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:50.019 15:05:15 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:50.019 15:05:15 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:50.019 15:05:15 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:50.019 15:05:15 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:50.019 15:05:15 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:50.019 15:05:15 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:50.019 15:05:15 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:50.019 15:05:15 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:50.019 15:05:15 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:50.019 15:05:15 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:50.019 15:05:15 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:50.019 15:05:15 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:50.019 15:05:15 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:50.019 15:05:15 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:50.019 15:05:15 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:50.019 15:05:15 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:50.019 15:05:15 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:50.019 15:05:15 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:50.019 15:05:15 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:50.019 15:05:15 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:50.019 15:05:15 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:50.019 15:05:15 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:50.019 15:05:15 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:50.019 15:05:15 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:50.019 15:05:15 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:50.019 15:05:15 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:50.019 15:05:15 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:50.019 15:05:15 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:50.019 15:05:15 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:50.019 15:05:15 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:50.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.019 --rc genhtml_branch_coverage=1 00:05:50.019 --rc genhtml_function_coverage=1 00:05:50.019 --rc genhtml_legend=1 00:05:50.019 --rc geninfo_all_blocks=1 00:05:50.019 --rc geninfo_unexecuted_blocks=1 00:05:50.019 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:50.019 ' 00:05:50.019 15:05:15 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:50.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.019 --rc genhtml_branch_coverage=1 
00:05:50.019 --rc genhtml_function_coverage=1 00:05:50.019 --rc genhtml_legend=1 00:05:50.019 --rc geninfo_all_blocks=1 00:05:50.019 --rc geninfo_unexecuted_blocks=1 00:05:50.019 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:50.019 ' 00:05:50.019 15:05:15 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:50.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.019 --rc genhtml_branch_coverage=1 00:05:50.019 --rc genhtml_function_coverage=1 00:05:50.019 --rc genhtml_legend=1 00:05:50.019 --rc geninfo_all_blocks=1 00:05:50.019 --rc geninfo_unexecuted_blocks=1 00:05:50.019 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:50.019 ' 00:05:50.019 15:05:15 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:50.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.019 --rc genhtml_branch_coverage=1 00:05:50.019 --rc genhtml_function_coverage=1 00:05:50.019 --rc genhtml_legend=1 00:05:50.019 --rc geninfo_all_blocks=1 00:05:50.019 --rc geninfo_unexecuted_blocks=1 00:05:50.019 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:50.019 ' 00:05:50.019 15:05:15 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:05:50.019 15:05:15 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:50.019 15:05:15 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:50.019 15:05:15 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:50.019 15:05:15 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:50.019 15:05:15 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:50.019 15:05:15 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:50.019 15:05:15 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:50.019 15:05:15 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:50.019 15:05:15 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:50.019 15:05:15 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:50.019 15:05:15 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:50.019 15:05:15 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:05:50.019 15:05:15 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:05:50.019 15:05:15 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:50.019 15:05:15 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:50.019 15:05:15 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:50.019 15:05:15 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:50.019 15:05:15 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:05:50.019 15:05:15 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:50.019 15:05:15 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:50.019 15:05:15 json_config_extra_key -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:50.019 15:05:15 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:50.019 15:05:15 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:50.019 15:05:15 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:50.019 15:05:15 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:50.019 15:05:15 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:50.019 15:05:15 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:50.019 15:05:15 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:50.019 15:05:15 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:50.019 15:05:15 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:50.020 15:05:15 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:50.020 15:05:15 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:50.020 15:05:15 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:50.020 15:05:15 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:50.020 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:50.020 15:05:15 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:50.020 15:05:15 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:50.020 15:05:15 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:50.020 15:05:15 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/common.sh 00:05:50.020 15:05:15 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:50.020 15:05:15 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # 
declare -A app_pid 00:05:50.020 15:05:15 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:50.020 15:05:15 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:50.020 15:05:15 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:50.020 15:05:15 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:50.020 15:05:15 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:50.020 15:05:15 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:50.020 15:05:15 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:50.020 15:05:15 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:50.020 INFO: launching applications... 00:05:50.020 15:05:15 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:05:50.020 15:05:15 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:50.020 15:05:15 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:50.020 15:05:15 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:50.020 15:05:15 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:50.020 15:05:15 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:50.020 15:05:15 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:50.020 15:05:15 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:50.020 15:05:15 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2354513 00:05:50.020 15:05:15 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:50.020 Waiting for target to run... 00:05:50.020 15:05:15 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2354513 /var/tmp/spdk_tgt.sock 00:05:50.020 15:05:15 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 2354513 ']' 00:05:50.020 15:05:15 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:50.020 15:05:15 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:50.020 15:05:15 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:50.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:05:50.020 15:05:15 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:05:50.020 15:05:15 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:50.020 15:05:15 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:50.020 [2024-11-27 15:05:15.267422] Starting SPDK v25.01-pre git sha1 2e10c84c8 / DPDK 24.03.0 initialization... 00:05:50.020 [2024-11-27 15:05:15.267490] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2354513 ] 00:05:50.586 [2024-11-27 15:05:15.699218] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.586 [2024-11-27 15:05:15.749575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.844 15:05:16 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:50.844 15:05:16 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:50.844 15:05:16 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:50.844 00:05:50.844 15:05:16 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:50.844 INFO: shutting down applications... 00:05:50.844 15:05:16 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:50.844 15:05:16 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:50.844 15:05:16 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:50.844 15:05:16 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2354513 ]] 00:05:50.844 15:05:16 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2354513 00:05:50.844 15:05:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:50.844 15:05:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:50.844 15:05:16 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2354513 00:05:50.844 15:05:16 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:51.410 15:05:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:51.410 15:05:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:51.410 15:05:16 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2354513 00:05:51.410 15:05:16 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:51.410 15:05:16 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:51.410 15:05:16 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:51.411 15:05:16 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:51.411 SPDK target shutdown done 00:05:51.411 15:05:16 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:51.411 Success 00:05:51.411 00:05:51.411 real 0m1.561s 00:05:51.411 user 0m1.166s 00:05:51.411 sys 0m0.552s 00:05:51.411 15:05:16 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:51.411 15:05:16 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:51.411 
************************************ 00:05:51.411 END TEST json_config_extra_key 00:05:51.411 ************************************ 00:05:51.411 15:05:16 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:51.411 15:05:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:51.411 15:05:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:51.411 15:05:16 -- common/autotest_common.sh@10 -- # set +x 00:05:51.411 ************************************ 00:05:51.411 START TEST alias_rpc 00:05:51.411 ************************************ 00:05:51.411 15:05:16 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:51.668 * Looking for test storage... 00:05:51.668 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc 00:05:51.668 15:05:16 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:51.668 15:05:16 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:51.668 15:05:16 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:51.668 15:05:16 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:51.668 15:05:16 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:51.668 15:05:16 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:51.668 15:05:16 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:51.668 15:05:16 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:51.668 15:05:16 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:51.668 15:05:16 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:51.668 15:05:16 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:51.668 15:05:16 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:51.668 15:05:16 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:51.668 15:05:16 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:51.668 15:05:16 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:51.668 15:05:16 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:51.668 15:05:16 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:51.668 15:05:16 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:51.668 15:05:16 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:51.668 15:05:16 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:51.668 15:05:16 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:51.668 15:05:16 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:51.668 15:05:16 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:51.668 15:05:16 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:51.668 15:05:16 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:51.668 15:05:16 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:51.668 15:05:16 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:51.668 15:05:16 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:51.668 15:05:16 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:51.668 15:05:16 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:51.668 15:05:16 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:51.669 15:05:16 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:51.669 15:05:16 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:51.669 15:05:16 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:51.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.669 --rc genhtml_branch_coverage=1 00:05:51.669 --rc genhtml_function_coverage=1 00:05:51.669 --rc genhtml_legend=1 00:05:51.669 --rc geninfo_all_blocks=1 00:05:51.669 --rc geninfo_unexecuted_blocks=1 00:05:51.669 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:51.669 ' 00:05:51.669 15:05:16 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:51.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.669 --rc genhtml_branch_coverage=1 00:05:51.669 --rc genhtml_function_coverage=1 00:05:51.669 --rc genhtml_legend=1 00:05:51.669 --rc geninfo_all_blocks=1 00:05:51.669 --rc geninfo_unexecuted_blocks=1 00:05:51.669 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:51.669 ' 00:05:51.669 15:05:16 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:51.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.669 --rc genhtml_branch_coverage=1 00:05:51.669 --rc genhtml_function_coverage=1 00:05:51.669 --rc genhtml_legend=1 00:05:51.669 --rc geninfo_all_blocks=1 00:05:51.669 --rc geninfo_unexecuted_blocks=1 00:05:51.669 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:51.669 ' 00:05:51.669 15:05:16 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:51.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.669 --rc genhtml_branch_coverage=1 00:05:51.669 --rc genhtml_function_coverage=1 00:05:51.669 --rc genhtml_legend=1 00:05:51.669 --rc geninfo_all_blocks=1 00:05:51.669 --rc geninfo_unexecuted_blocks=1 00:05:51.669 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:51.669 ' 00:05:51.669 15:05:16 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:51.669 15:05:16 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2354844 00:05:51.669 15:05:16 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:51.669 15:05:16 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2354844 00:05:51.669 15:05:16 alias_rpc -- 
common/autotest_common.sh@835 -- # '[' -z 2354844 ']' 00:05:51.669 15:05:16 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.669 15:05:16 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:51.669 15:05:16 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:51.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:51.669 15:05:16 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:51.669 15:05:16 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.669 [2024-11-27 15:05:16.911942] Starting SPDK v25.01-pre git sha1 2e10c84c8 / DPDK 24.03.0 initialization... 00:05:51.669 [2024-11-27 15:05:16.912006] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2354844 ] 00:05:51.669 [2024-11-27 15:05:16.983798] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.926 [2024-11-27 15:05:17.027548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.926 15:05:17 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:51.926 15:05:17 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:51.926 15:05:17 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:52.184 15:05:17 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2354844 00:05:52.184 15:05:17 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 2354844 ']' 00:05:52.184 15:05:17 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 2354844 00:05:52.184 15:05:17 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:52.184 15:05:17 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:52.184 15:05:17 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2354844 00:05:52.441 15:05:17 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:52.441 15:05:17 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:52.441 15:05:17 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2354844' 00:05:52.441 killing process with pid 2354844 00:05:52.441 15:05:17 alias_rpc -- common/autotest_common.sh@973 -- # kill 2354844 00:05:52.441 15:05:17 alias_rpc -- common/autotest_common.sh@978 -- # wait 2354844 00:05:52.698 00:05:52.698 real 0m1.129s 00:05:52.698 user 0m1.130s 00:05:52.698 sys 0m0.435s 00:05:52.698 15:05:17 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:52.698 15:05:17 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.698 ************************************ 00:05:52.698 END TEST alias_rpc 00:05:52.698 ************************************ 00:05:52.698 15:05:17 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:52.698 15:05:17 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:52.698 15:05:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:52.698 15:05:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:52.698 15:05:17 -- common/autotest_common.sh@10 -- # set +x 00:05:52.698 ************************************ 00:05:52.698 START TEST 
spdkcli_tcp 00:05:52.698 ************************************ 00:05:52.698 15:05:17 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:52.698 * Looking for test storage... 00:05:52.698 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli 00:05:52.698 15:05:17 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:52.698 15:05:17 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:05:52.698 15:05:17 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:52.955 15:05:18 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:52.955 15:05:18 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:52.955 15:05:18 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:52.955 15:05:18 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:52.955 15:05:18 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:52.955 15:05:18 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:52.955 15:05:18 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:52.955 15:05:18 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:52.955 15:05:18 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:52.955 15:05:18 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:52.955 15:05:18 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:52.955 15:05:18 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:52.955 15:05:18 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:52.955 15:05:18 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:52.955 15:05:18 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:52.955 15:05:18 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:52.955 15:05:18 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:52.955 15:05:18 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:52.955 15:05:18 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:52.955 15:05:18 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:52.955 15:05:18 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:52.955 15:05:18 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:52.955 15:05:18 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:52.955 15:05:18 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:52.955 15:05:18 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:52.955 15:05:18 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:52.955 15:05:18 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:52.955 15:05:18 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:52.955 15:05:18 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:52.955 15:05:18 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:52.955 15:05:18 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:52.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.955 --rc genhtml_branch_coverage=1 00:05:52.955 --rc genhtml_function_coverage=1 00:05:52.955 --rc genhtml_legend=1 00:05:52.955 --rc geninfo_all_blocks=1 00:05:52.955 --rc geninfo_unexecuted_blocks=1 00:05:52.955 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:52.955 ' 00:05:52.955 15:05:18 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:52.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.955 --rc genhtml_branch_coverage=1 00:05:52.955 --rc genhtml_function_coverage=1 00:05:52.955 --rc genhtml_legend=1 00:05:52.955 --rc geninfo_all_blocks=1 00:05:52.955 --rc geninfo_unexecuted_blocks=1 00:05:52.955 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:52.955 ' 00:05:52.955 15:05:18 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:52.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.956 --rc genhtml_branch_coverage=1 00:05:52.956 --rc genhtml_function_coverage=1 00:05:52.956 --rc genhtml_legend=1 00:05:52.956 --rc geninfo_all_blocks=1 00:05:52.956 --rc geninfo_unexecuted_blocks=1 00:05:52.956 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:52.956 ' 00:05:52.956 15:05:18 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:52.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.956 --rc genhtml_branch_coverage=1 00:05:52.956 --rc genhtml_function_coverage=1 00:05:52.956 --rc genhtml_legend=1 00:05:52.956 --rc geninfo_all_blocks=1 00:05:52.956 --rc geninfo_unexecuted_blocks=1 00:05:52.956 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:52.956 ' 00:05:52.956 15:05:18 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/common.sh 00:05:52.956 15:05:18 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:52.956 15:05:18 spdkcli_tcp -- spdkcli/common.sh@7 -- # 
spdk_clear_config_py=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/clear_config.py 00:05:52.956 15:05:18 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:52.956 15:05:18 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:52.956 15:05:18 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:52.956 15:05:18 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:52.956 15:05:18 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:52.956 15:05:18 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:52.956 15:05:18 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2355163 00:05:52.956 15:05:18 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:52.956 15:05:18 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2355163 00:05:52.956 15:05:18 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 2355163 ']' 00:05:52.956 15:05:18 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.956 15:05:18 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:52.956 15:05:18 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:52.956 15:05:18 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:52.956 15:05:18 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:52.956 [2024-11-27 15:05:18.117006] Starting SPDK v25.01-pre git sha1 2e10c84c8 / DPDK 24.03.0 initialization... 00:05:52.956 [2024-11-27 15:05:18.117068] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2355163 ] 00:05:52.956 [2024-11-27 15:05:18.186796] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:52.956 [2024-11-27 15:05:18.231093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:52.956 [2024-11-27 15:05:18.231096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.213 15:05:18 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:53.213 15:05:18 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:53.213 15:05:18 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2355186 00:05:53.213 15:05:18 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:53.213 15:05:18 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:53.472 [ 00:05:53.472 "spdk_get_version", 00:05:53.472 "rpc_get_methods", 00:05:53.472 "notify_get_notifications", 00:05:53.472 "notify_get_types", 00:05:53.472 "trace_get_info", 00:05:53.472 "trace_get_tpoint_group_mask", 00:05:53.472 "trace_disable_tpoint_group", 00:05:53.472 "trace_enable_tpoint_group", 00:05:53.472 "trace_clear_tpoint_mask", 00:05:53.472 "trace_set_tpoint_mask", 00:05:53.472 "fsdev_set_opts", 00:05:53.472 "fsdev_get_opts", 00:05:53.472 "framework_get_pci_devices", 00:05:53.472 "framework_get_config", 00:05:53.472 "framework_get_subsystems", 00:05:53.472 "vfu_tgt_set_base_path", 00:05:53.472 
"keyring_get_keys", 00:05:53.472 "iobuf_get_stats", 00:05:53.472 "iobuf_set_options", 00:05:53.472 "sock_get_default_impl", 00:05:53.472 "sock_set_default_impl", 00:05:53.472 "sock_impl_set_options", 00:05:53.472 "sock_impl_get_options", 00:05:53.472 "vmd_rescan", 00:05:53.472 "vmd_remove_device", 00:05:53.472 "vmd_enable", 00:05:53.472 "accel_get_stats", 00:05:53.472 "accel_set_options", 00:05:53.472 "accel_set_driver", 00:05:53.472 "accel_crypto_key_destroy", 00:05:53.472 "accel_crypto_keys_get", 00:05:53.472 "accel_crypto_key_create", 00:05:53.472 "accel_assign_opc", 00:05:53.472 "accel_get_module_info", 00:05:53.472 "accel_get_opc_assignments", 00:05:53.472 "bdev_get_histogram", 00:05:53.472 "bdev_enable_histogram", 00:05:53.472 "bdev_set_qos_limit", 00:05:53.472 "bdev_set_qd_sampling_period", 00:05:53.472 "bdev_get_bdevs", 00:05:53.472 "bdev_reset_iostat", 00:05:53.472 "bdev_get_iostat", 00:05:53.472 "bdev_examine", 00:05:53.472 "bdev_wait_for_examine", 00:05:53.472 "bdev_set_options", 00:05:53.472 "scsi_get_devices", 00:05:53.472 "thread_set_cpumask", 00:05:53.472 "scheduler_set_options", 00:05:53.472 "framework_get_governor", 00:05:53.472 "framework_get_scheduler", 00:05:53.472 "framework_set_scheduler", 00:05:53.472 "framework_get_reactors", 00:05:53.472 "thread_get_io_channels", 00:05:53.472 "thread_get_pollers", 00:05:53.472 "thread_get_stats", 00:05:53.472 "framework_monitor_context_switch", 00:05:53.472 "spdk_kill_instance", 00:05:53.472 "log_enable_timestamps", 00:05:53.472 "log_get_flags", 00:05:53.472 "log_clear_flag", 00:05:53.472 "log_set_flag", 00:05:53.472 "log_get_level", 00:05:53.472 "log_set_level", 00:05:53.472 "log_get_print_level", 00:05:53.472 "log_set_print_level", 00:05:53.472 "framework_enable_cpumask_locks", 00:05:53.472 "framework_disable_cpumask_locks", 00:05:53.472 "framework_wait_init", 00:05:53.472 "framework_start_init", 00:05:53.472 "virtio_blk_create_transport", 00:05:53.472 "virtio_blk_get_transports", 00:05:53.472 "vhost_controller_set_coalescing", 00:05:53.472 "vhost_get_controllers", 00:05:53.472 "vhost_delete_controller", 00:05:53.472 "vhost_create_blk_controller", 00:05:53.472 "vhost_scsi_controller_remove_target", 00:05:53.472 "vhost_scsi_controller_add_target", 00:05:53.472 "vhost_start_scsi_controller", 00:05:53.472 "vhost_create_scsi_controller", 00:05:53.472 "ublk_recover_disk", 00:05:53.472 "ublk_get_disks", 00:05:53.472 "ublk_stop_disk", 00:05:53.472 "ublk_start_disk", 00:05:53.472 "ublk_destroy_target", 00:05:53.472 "ublk_create_target", 00:05:53.472 "nbd_get_disks", 00:05:53.472 "nbd_stop_disk", 00:05:53.472 "nbd_start_disk", 00:05:53.472 "env_dpdk_get_mem_stats", 00:05:53.472 "nvmf_stop_mdns_prr", 00:05:53.472 "nvmf_publish_mdns_prr", 00:05:53.472 "nvmf_subsystem_get_listeners", 00:05:53.472 "nvmf_subsystem_get_qpairs", 00:05:53.472 "nvmf_subsystem_get_controllers", 00:05:53.472 "nvmf_get_stats", 00:05:53.472 "nvmf_get_transports", 00:05:53.472 "nvmf_create_transport", 00:05:53.472 "nvmf_get_targets", 00:05:53.472 "nvmf_delete_target", 00:05:53.472 "nvmf_create_target", 00:05:53.472 "nvmf_subsystem_allow_any_host", 00:05:53.472 "nvmf_subsystem_set_keys", 00:05:53.472 "nvmf_subsystem_remove_host", 00:05:53.472 "nvmf_subsystem_add_host", 00:05:53.472 "nvmf_ns_remove_host", 00:05:53.472 "nvmf_ns_add_host", 00:05:53.472 "nvmf_subsystem_remove_ns", 00:05:53.472 "nvmf_subsystem_set_ns_ana_group", 00:05:53.472 "nvmf_subsystem_add_ns", 00:05:53.472 "nvmf_subsystem_listener_set_ana_state", 00:05:53.472 "nvmf_discovery_get_referrals", 
00:05:53.472 "nvmf_discovery_remove_referral", 00:05:53.472 "nvmf_discovery_add_referral", 00:05:53.472 "nvmf_subsystem_remove_listener", 00:05:53.472 "nvmf_subsystem_add_listener", 00:05:53.472 "nvmf_delete_subsystem", 00:05:53.472 "nvmf_create_subsystem", 00:05:53.472 "nvmf_get_subsystems", 00:05:53.472 "nvmf_set_crdt", 00:05:53.472 "nvmf_set_config", 00:05:53.472 "nvmf_set_max_subsystems", 00:05:53.472 "iscsi_get_histogram", 00:05:53.472 "iscsi_enable_histogram", 00:05:53.472 "iscsi_set_options", 00:05:53.472 "iscsi_get_auth_groups", 00:05:53.472 "iscsi_auth_group_remove_secret", 00:05:53.472 "iscsi_auth_group_add_secret", 00:05:53.472 "iscsi_delete_auth_group", 00:05:53.472 "iscsi_create_auth_group", 00:05:53.472 "iscsi_set_discovery_auth", 00:05:53.472 "iscsi_get_options", 00:05:53.472 "iscsi_target_node_request_logout", 00:05:53.472 "iscsi_target_node_set_redirect", 00:05:53.472 "iscsi_target_node_set_auth", 00:05:53.472 "iscsi_target_node_add_lun", 00:05:53.472 "iscsi_get_stats", 00:05:53.472 "iscsi_get_connections", 00:05:53.472 "iscsi_portal_group_set_auth", 00:05:53.472 "iscsi_start_portal_group", 00:05:53.472 "iscsi_delete_portal_group", 00:05:53.472 "iscsi_create_portal_group", 00:05:53.472 "iscsi_get_portal_groups", 00:05:53.472 "iscsi_delete_target_node", 00:05:53.472 "iscsi_target_node_remove_pg_ig_maps", 00:05:53.472 "iscsi_target_node_add_pg_ig_maps", 00:05:53.472 "iscsi_create_target_node", 00:05:53.472 "iscsi_get_target_nodes", 00:05:53.472 "iscsi_delete_initiator_group", 00:05:53.472 "iscsi_initiator_group_remove_initiators", 00:05:53.472 "iscsi_initiator_group_add_initiators", 00:05:53.472 "iscsi_create_initiator_group", 00:05:53.472 "iscsi_get_initiator_groups", 00:05:53.472 "fsdev_aio_delete", 00:05:53.472 "fsdev_aio_create", 00:05:53.472 "keyring_linux_set_options", 00:05:53.472 "keyring_file_remove_key", 00:05:53.472 "keyring_file_add_key", 00:05:53.472 "vfu_virtio_create_fs_endpoint", 00:05:53.472 "vfu_virtio_create_scsi_endpoint", 00:05:53.472 "vfu_virtio_scsi_remove_target", 00:05:53.472 "vfu_virtio_scsi_add_target", 00:05:53.472 "vfu_virtio_create_blk_endpoint", 00:05:53.472 "vfu_virtio_delete_endpoint", 00:05:53.472 "iaa_scan_accel_module", 00:05:53.472 "dsa_scan_accel_module", 00:05:53.472 "ioat_scan_accel_module", 00:05:53.472 "accel_error_inject_error", 00:05:53.472 "bdev_iscsi_delete", 00:05:53.472 "bdev_iscsi_create", 00:05:53.472 "bdev_iscsi_set_options", 00:05:53.472 "bdev_virtio_attach_controller", 00:05:53.472 "bdev_virtio_scsi_get_devices", 00:05:53.472 "bdev_virtio_detach_controller", 00:05:53.472 "bdev_virtio_blk_set_hotplug", 00:05:53.472 "bdev_ftl_set_property", 00:05:53.472 "bdev_ftl_get_properties", 00:05:53.472 "bdev_ftl_get_stats", 00:05:53.472 "bdev_ftl_unmap", 00:05:53.472 "bdev_ftl_unload", 00:05:53.472 "bdev_ftl_delete", 00:05:53.472 "bdev_ftl_load", 00:05:53.472 "bdev_ftl_create", 00:05:53.472 "bdev_aio_delete", 00:05:53.472 "bdev_aio_rescan", 00:05:53.472 "bdev_aio_create", 00:05:53.472 "blobfs_create", 00:05:53.472 "blobfs_detect", 00:05:53.472 "blobfs_set_cache_size", 00:05:53.472 "bdev_zone_block_delete", 00:05:53.473 "bdev_zone_block_create", 00:05:53.473 "bdev_delay_delete", 00:05:53.473 "bdev_delay_create", 00:05:53.473 "bdev_delay_update_latency", 00:05:53.473 "bdev_split_delete", 00:05:53.473 "bdev_split_create", 00:05:53.473 "bdev_error_inject_error", 00:05:53.473 "bdev_error_delete", 00:05:53.473 "bdev_error_create", 00:05:53.473 "bdev_raid_set_options", 00:05:53.473 "bdev_raid_remove_base_bdev", 00:05:53.473 
"bdev_raid_add_base_bdev", 00:05:53.473 "bdev_raid_delete", 00:05:53.473 "bdev_raid_create", 00:05:53.473 "bdev_raid_get_bdevs", 00:05:53.473 "bdev_lvol_set_parent_bdev", 00:05:53.473 "bdev_lvol_set_parent", 00:05:53.473 "bdev_lvol_check_shallow_copy", 00:05:53.473 "bdev_lvol_start_shallow_copy", 00:05:53.473 "bdev_lvol_grow_lvstore", 00:05:53.473 "bdev_lvol_get_lvols", 00:05:53.473 "bdev_lvol_get_lvstores", 00:05:53.473 "bdev_lvol_delete", 00:05:53.473 "bdev_lvol_set_read_only", 00:05:53.473 "bdev_lvol_resize", 00:05:53.473 "bdev_lvol_decouple_parent", 00:05:53.473 "bdev_lvol_inflate", 00:05:53.473 "bdev_lvol_rename", 00:05:53.473 "bdev_lvol_clone_bdev", 00:05:53.473 "bdev_lvol_clone", 00:05:53.473 "bdev_lvol_snapshot", 00:05:53.473 "bdev_lvol_create", 00:05:53.473 "bdev_lvol_delete_lvstore", 00:05:53.473 "bdev_lvol_rename_lvstore", 00:05:53.473 "bdev_lvol_create_lvstore", 00:05:53.473 "bdev_passthru_delete", 00:05:53.473 "bdev_passthru_create", 00:05:53.473 "bdev_nvme_cuse_unregister", 00:05:53.473 "bdev_nvme_cuse_register", 00:05:53.473 "bdev_opal_new_user", 00:05:53.473 "bdev_opal_set_lock_state", 00:05:53.473 "bdev_opal_delete", 00:05:53.473 "bdev_opal_get_info", 00:05:53.473 "bdev_opal_create", 00:05:53.473 "bdev_nvme_opal_revert", 00:05:53.473 "bdev_nvme_opal_init", 00:05:53.473 "bdev_nvme_send_cmd", 00:05:53.473 "bdev_nvme_set_keys", 00:05:53.473 "bdev_nvme_get_path_iostat", 00:05:53.473 "bdev_nvme_get_mdns_discovery_info", 00:05:53.473 "bdev_nvme_stop_mdns_discovery", 00:05:53.473 "bdev_nvme_start_mdns_discovery", 00:05:53.473 "bdev_nvme_set_multipath_policy", 00:05:53.473 "bdev_nvme_set_preferred_path", 00:05:53.473 "bdev_nvme_get_io_paths", 00:05:53.473 "bdev_nvme_remove_error_injection", 00:05:53.473 "bdev_nvme_add_error_injection", 00:05:53.473 "bdev_nvme_get_discovery_info", 00:05:53.473 "bdev_nvme_stop_discovery", 00:05:53.473 "bdev_nvme_start_discovery", 00:05:53.473 "bdev_nvme_get_controller_health_info", 00:05:53.473 "bdev_nvme_disable_controller", 00:05:53.473 "bdev_nvme_enable_controller", 00:05:53.473 "bdev_nvme_reset_controller", 00:05:53.473 "bdev_nvme_get_transport_statistics", 00:05:53.473 "bdev_nvme_apply_firmware", 00:05:53.473 "bdev_nvme_detach_controller", 00:05:53.473 "bdev_nvme_get_controllers", 00:05:53.473 "bdev_nvme_attach_controller", 00:05:53.473 "bdev_nvme_set_hotplug", 00:05:53.473 "bdev_nvme_set_options", 00:05:53.473 "bdev_null_resize", 00:05:53.473 "bdev_null_delete", 00:05:53.473 "bdev_null_create", 00:05:53.473 "bdev_malloc_delete", 00:05:53.473 "bdev_malloc_create" 00:05:53.473 ] 00:05:53.473 15:05:18 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:53.473 15:05:18 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:53.473 15:05:18 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:53.473 15:05:18 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:53.473 15:05:18 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2355163 00:05:53.473 15:05:18 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 2355163 ']' 00:05:53.473 15:05:18 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 2355163 00:05:53.473 15:05:18 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:53.473 15:05:18 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:53.473 15:05:18 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2355163 00:05:53.473 15:05:18 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:53.473 
15:05:18 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:53.473 15:05:18 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2355163' 00:05:53.473 killing process with pid 2355163 00:05:53.473 15:05:18 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 2355163 00:05:53.473 15:05:18 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 2355163 00:05:53.731 00:05:53.731 real 0m1.163s 00:05:53.731 user 0m1.955s 00:05:53.731 sys 0m0.500s 00:05:53.731 15:05:19 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:53.731 15:05:19 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:53.731 ************************************ 00:05:53.731 END TEST spdkcli_tcp 00:05:53.731 ************************************ 00:05:53.989 15:05:19 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:53.989 15:05:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:53.989 15:05:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:53.989 15:05:19 -- common/autotest_common.sh@10 -- # set +x 00:05:53.989 ************************************ 00:05:53.989 START TEST dpdk_mem_utility 00:05:53.989 ************************************ 00:05:53.989 15:05:19 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:53.989 * Looking for test storage... 00:05:53.989 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility 00:05:53.989 15:05:19 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:53.989 15:05:19 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:05:53.989 15:05:19 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:53.990 15:05:19 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:53.990 15:05:19 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:53.990 15:05:19 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:53.990 15:05:19 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:53.990 15:05:19 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:53.990 15:05:19 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:53.990 15:05:19 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:53.991 15:05:19 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:53.991 15:05:19 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:53.991 15:05:19 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:53.991 15:05:19 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:53.991 15:05:19 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:53.991 15:05:19 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:53.991 15:05:19 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:53.991 15:05:19 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:53.991 15:05:19 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:53.991 15:05:19 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:53.991 15:05:19 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:53.991 15:05:19 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:53.991 15:05:19 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:53.991 15:05:19 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:53.991 15:05:19 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:53.991 15:05:19 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:53.991 15:05:19 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:53.991 15:05:19 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:53.991 15:05:19 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:53.991 15:05:19 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:53.991 15:05:19 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:53.991 15:05:19 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:53.991 15:05:19 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:53.991 15:05:19 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:53.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.991 --rc genhtml_branch_coverage=1 00:05:53.992 --rc genhtml_function_coverage=1 00:05:53.992 --rc genhtml_legend=1 00:05:53.992 --rc geninfo_all_blocks=1 00:05:53.992 --rc geninfo_unexecuted_blocks=1 00:05:53.992 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:53.992 ' 00:05:53.992 15:05:19 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:53.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.992 --rc genhtml_branch_coverage=1 00:05:53.992 --rc genhtml_function_coverage=1 00:05:53.992 --rc genhtml_legend=1 00:05:53.992 --rc geninfo_all_blocks=1 00:05:53.992 --rc geninfo_unexecuted_blocks=1 00:05:53.992 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:53.992 ' 00:05:53.992 15:05:19 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:53.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.992 --rc genhtml_branch_coverage=1 00:05:53.992 --rc genhtml_function_coverage=1 00:05:53.992 --rc genhtml_legend=1 00:05:53.992 --rc geninfo_all_blocks=1 00:05:53.992 --rc geninfo_unexecuted_blocks=1 00:05:53.992 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:53.992 ' 00:05:53.992 15:05:19 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:53.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.992 --rc genhtml_branch_coverage=1 00:05:53.992 --rc genhtml_function_coverage=1 00:05:53.992 --rc genhtml_legend=1 00:05:53.992 --rc geninfo_all_blocks=1 00:05:53.992 --rc geninfo_unexecuted_blocks=1 00:05:53.992 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:53.992 ' 00:05:53.992 15:05:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:53.992 15:05:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:53.992 15:05:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2355505 00:05:53.992 15:05:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2355505 00:05:53.993 15:05:19 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 2355505 ']' 00:05:53.993 15:05:19 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.993 15:05:19 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:53.993 15:05:19 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.993 15:05:19 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:53.993 15:05:19 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:54.255 [2024-11-27 15:05:19.332928] Starting SPDK v25.01-pre git sha1 2e10c84c8 / DPDK 24.03.0 initialization... 00:05:54.255 [2024-11-27 15:05:19.332999] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2355505 ] 00:05:54.255 [2024-11-27 15:05:19.401258] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.255 [2024-11-27 15:05:19.441269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.514 15:05:19 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:54.514 15:05:19 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:54.514 15:05:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:54.514 15:05:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:54.514 15:05:19 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.514 15:05:19 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:54.514 { 00:05:54.514 "filename": "/tmp/spdk_mem_dump.txt" 00:05:54.514 } 00:05:54.514 15:05:19 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.514 15:05:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:54.514 DPDK memory size 818.000000 MiB in 1 heap(s) 00:05:54.514 1 heaps totaling size 818.000000 MiB 00:05:54.514 size: 818.000000 MiB heap id: 0 00:05:54.514 end heaps---------- 00:05:54.514 9 mempools totaling size 603.782043 MiB 00:05:54.514 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:54.514 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:54.514 size: 100.555481 MiB name: bdev_io_2355505 00:05:54.514 size: 50.003479 MiB name: msgpool_2355505 00:05:54.514 size: 36.509338 MiB name: fsdev_io_2355505 00:05:54.514 size: 21.763794 MiB name: PDU_Pool 00:05:54.514 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:54.514 size: 4.133484 MiB name: evtpool_2355505 00:05:54.514 size: 0.026123 MiB name: Session_Pool 00:05:54.514 end mempools------- 00:05:54.514 6 memzones totaling size 4.142822 MiB 00:05:54.514 size: 1.000366 MiB name: RG_ring_0_2355505 00:05:54.514 size: 1.000366 MiB name: 
RG_ring_1_2355505 00:05:54.514 size: 1.000366 MiB name: RG_ring_4_2355505 00:05:54.514 size: 1.000366 MiB name: RG_ring_5_2355505 00:05:54.514 size: 0.125366 MiB name: RG_ring_2_2355505 00:05:54.514 size: 0.015991 MiB name: RG_ring_3_2355505 00:05:54.514 end memzones------- 00:05:54.514 15:05:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:54.514 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:05:54.514 list of free elements. size: 10.852478 MiB 00:05:54.514 element at address: 0x200019200000 with size: 0.999878 MiB 00:05:54.514 element at address: 0x200019400000 with size: 0.999878 MiB 00:05:54.514 element at address: 0x200000400000 with size: 0.998535 MiB 00:05:54.514 element at address: 0x200032000000 with size: 0.994446 MiB 00:05:54.514 element at address: 0x200008000000 with size: 0.959839 MiB 00:05:54.514 element at address: 0x200012c00000 with size: 0.944275 MiB 00:05:54.514 element at address: 0x200019600000 with size: 0.936584 MiB 00:05:54.514 element at address: 0x200000200000 with size: 0.717346 MiB 00:05:54.514 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:05:54.514 element at address: 0x200000c00000 with size: 0.495422 MiB 00:05:54.514 element at address: 0x200003e00000 with size: 0.490723 MiB 00:05:54.514 element at address: 0x200019800000 with size: 0.485657 MiB 00:05:54.514 element at address: 0x200010600000 with size: 0.481934 MiB 00:05:54.514 element at address: 0x200028200000 with size: 0.410034 MiB 00:05:54.514 element at address: 0x200000800000 with size: 0.355042 MiB 00:05:54.514 list of standard malloc elements. size: 199.218628 MiB 00:05:54.514 element at address: 0x2000081fff80 with size: 132.000122 MiB 00:05:54.514 element at address: 0x200003ffff80 with size: 64.000122 MiB 00:05:54.514 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:54.514 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:05:54.514 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:05:54.514 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:54.514 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:05:54.514 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:54.514 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:05:54.514 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:54.514 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:54.514 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:05:54.514 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:05:54.514 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:05:54.514 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:05:54.514 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:05:54.514 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:05:54.514 element at address: 0x20000085b040 with size: 0.000183 MiB 00:05:54.514 element at address: 0x20000085b100 with size: 0.000183 MiB 00:05:54.514 element at address: 0x2000008db3c0 with size: 0.000183 MiB 00:05:54.514 element at address: 0x2000008db5c0 with size: 0.000183 MiB 00:05:54.514 element at address: 0x2000008df880 with size: 0.000183 MiB 00:05:54.514 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:05:54.514 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:05:54.514 element at address: 0x200000cff000 with size: 
0.000183 MiB 00:05:54.514 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:05:54.514 element at address: 0x200003e7da00 with size: 0.000183 MiB 00:05:54.514 element at address: 0x200003e7dac0 with size: 0.000183 MiB 00:05:54.514 element at address: 0x200003efdd80 with size: 0.000183 MiB 00:05:54.514 element at address: 0x2000080fdd80 with size: 0.000183 MiB 00:05:54.514 element at address: 0x20001067b600 with size: 0.000183 MiB 00:05:54.514 element at address: 0x20001067b6c0 with size: 0.000183 MiB 00:05:54.514 element at address: 0x2000106fb980 with size: 0.000183 MiB 00:05:54.514 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:05:54.514 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:05:54.514 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:05:54.514 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:05:54.514 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:05:54.514 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:05:54.514 element at address: 0x200028268f80 with size: 0.000183 MiB 00:05:54.514 element at address: 0x200028269040 with size: 0.000183 MiB 00:05:54.514 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:05:54.514 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:05:54.514 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:05:54.514 list of memzone associated elements. size: 607.928894 MiB 00:05:54.514 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:05:54.514 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:54.514 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:05:54.514 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:54.514 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:05:54.514 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_2355505_0 00:05:54.514 element at address: 0x200000dff380 with size: 48.003052 MiB 00:05:54.514 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2355505_0 00:05:54.514 element at address: 0x2000107fdb80 with size: 36.008911 MiB 00:05:54.514 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_2355505_0 00:05:54.514 element at address: 0x2000199be940 with size: 20.255554 MiB 00:05:54.514 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:54.514 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:05:54.514 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:54.514 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:05:54.514 associated memzone info: size: 3.000122 MiB name: MP_evtpool_2355505_0 00:05:54.514 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:05:54.514 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2355505 00:05:54.514 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:54.514 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2355505 00:05:54.514 element at address: 0x2000106fba40 with size: 1.008118 MiB 00:05:54.514 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:54.514 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:05:54.514 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:54.514 element at address: 0x2000080fde40 with size: 1.008118 MiB 00:05:54.514 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:54.514 element at address: 0x200003efde40 with 
size: 1.008118 MiB 00:05:54.514 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:54.514 element at address: 0x200000cff180 with size: 1.000488 MiB 00:05:54.514 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2355505 00:05:54.514 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:05:54.514 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2355505 00:05:54.514 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:05:54.514 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2355505 00:05:54.514 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:05:54.514 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2355505 00:05:54.514 element at address: 0x20000085b1c0 with size: 0.500488 MiB 00:05:54.514 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_2355505 00:05:54.514 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:05:54.514 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2355505 00:05:54.514 element at address: 0x20001067b780 with size: 0.500488 MiB 00:05:54.514 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:54.514 element at address: 0x200003e7db80 with size: 0.500488 MiB 00:05:54.514 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:54.514 element at address: 0x20001987c540 with size: 0.250488 MiB 00:05:54.514 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:54.514 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:05:54.514 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_2355505 00:05:54.514 element at address: 0x2000008df940 with size: 0.125488 MiB 00:05:54.514 associated memzone info: size: 0.125366 MiB name: RG_ring_2_2355505 00:05:54.514 element at address: 0x2000080f5b80 with size: 0.031738 MiB 00:05:54.514 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:54.514 element at address: 0x200028269100 with size: 0.023743 MiB 00:05:54.514 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:54.514 element at address: 0x2000008db680 with size: 0.016113 MiB 00:05:54.514 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2355505 00:05:54.514 element at address: 0x20002826f240 with size: 0.002441 MiB 00:05:54.514 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:54.514 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:05:54.514 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2355505 00:05:54.514 element at address: 0x2000008db480 with size: 0.000305 MiB 00:05:54.514 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_2355505 00:05:54.514 element at address: 0x20000085af00 with size: 0.000305 MiB 00:05:54.514 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2355505 00:05:54.514 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:05:54.514 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:54.514 15:05:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:54.514 15:05:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2355505 00:05:54.514 15:05:19 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 2355505 ']' 00:05:54.514 15:05:19 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 2355505 00:05:54.514 15:05:19 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 
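The heap, mempool and memzone summaries above are produced in two steps: the env_dpdk_get_mem_stats RPC asks the running target to write its DPDK memory state to a dump file (the RPC reply names /tmp/spdk_mem_dump.txt), and dpdk_mem_info.py then post-processes that dump. The same flow the test drives, shown here with repository-relative paths:

  # step 1: have the target dump its DPDK memory state
  ./scripts/rpc.py env_dpdk_get_mem_stats

  # step 2: summarize heaps, mempools and memzones from the dump
  ./scripts/dpdk_mem_info.py

  # the second invocation in the trace adds '-m 0', which produced the
  # element-by-element heap listing shown above
  ./scripts/dpdk_mem_info.py -m 0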
00:05:54.514 15:05:19 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:54.514 15:05:19 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2355505 00:05:54.514 15:05:19 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:54.514 15:05:19 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:54.514 15:05:19 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2355505' 00:05:54.514 killing process with pid 2355505 00:05:54.514 15:05:19 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 2355505 00:05:54.514 15:05:19 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 2355505 00:05:55.081 00:05:55.081 real 0m0.988s 00:05:55.081 user 0m0.910s 00:05:55.081 sys 0m0.425s 00:05:55.081 15:05:20 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:55.081 15:05:20 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:55.081 ************************************ 00:05:55.081 END TEST dpdk_mem_utility 00:05:55.081 ************************************ 00:05:55.081 15:05:20 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:05:55.081 15:05:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:55.081 15:05:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:55.081 15:05:20 -- common/autotest_common.sh@10 -- # set +x 00:05:55.081 ************************************ 00:05:55.081 START TEST event 00:05:55.081 ************************************ 00:05:55.081 15:05:20 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:05:55.081 * Looking for test storage... 00:05:55.081 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:05:55.081 15:05:20 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:55.081 15:05:20 event -- common/autotest_common.sh@1693 -- # lcov --version 00:05:55.081 15:05:20 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:55.081 15:05:20 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:55.081 15:05:20 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:55.081 15:05:20 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:55.081 15:05:20 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:55.081 15:05:20 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:55.081 15:05:20 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:55.081 15:05:20 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:55.081 15:05:20 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:55.081 15:05:20 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:55.081 15:05:20 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:55.081 15:05:20 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:55.081 15:05:20 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:55.081 15:05:20 event -- scripts/common.sh@344 -- # case "$op" in 00:05:55.081 15:05:20 event -- scripts/common.sh@345 -- # : 1 00:05:55.081 15:05:20 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:55.081 15:05:20 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:55.081 15:05:20 event -- scripts/common.sh@365 -- # decimal 1 00:05:55.081 15:05:20 event -- scripts/common.sh@353 -- # local d=1 00:05:55.081 15:05:20 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:55.081 15:05:20 event -- scripts/common.sh@355 -- # echo 1 00:05:55.081 15:05:20 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:55.081 15:05:20 event -- scripts/common.sh@366 -- # decimal 2 00:05:55.081 15:05:20 event -- scripts/common.sh@353 -- # local d=2 00:05:55.081 15:05:20 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:55.081 15:05:20 event -- scripts/common.sh@355 -- # echo 2 00:05:55.081 15:05:20 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:55.081 15:05:20 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:55.081 15:05:20 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:55.081 15:05:20 event -- scripts/common.sh@368 -- # return 0 00:05:55.081 15:05:20 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:55.081 15:05:20 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:55.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.081 --rc genhtml_branch_coverage=1 00:05:55.081 --rc genhtml_function_coverage=1 00:05:55.081 --rc genhtml_legend=1 00:05:55.081 --rc geninfo_all_blocks=1 00:05:55.081 --rc geninfo_unexecuted_blocks=1 00:05:55.081 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:55.081 ' 00:05:55.081 15:05:20 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:55.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.081 --rc genhtml_branch_coverage=1 00:05:55.081 --rc genhtml_function_coverage=1 00:05:55.082 --rc genhtml_legend=1 00:05:55.082 --rc geninfo_all_blocks=1 00:05:55.082 --rc geninfo_unexecuted_blocks=1 00:05:55.082 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:55.082 ' 00:05:55.082 15:05:20 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:55.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.082 --rc genhtml_branch_coverage=1 00:05:55.082 --rc genhtml_function_coverage=1 00:05:55.082 --rc genhtml_legend=1 00:05:55.082 --rc geninfo_all_blocks=1 00:05:55.082 --rc geninfo_unexecuted_blocks=1 00:05:55.082 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:55.082 ' 00:05:55.082 15:05:20 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:55.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.082 --rc genhtml_branch_coverage=1 00:05:55.082 --rc genhtml_function_coverage=1 00:05:55.082 --rc genhtml_legend=1 00:05:55.082 --rc geninfo_all_blocks=1 00:05:55.082 --rc geninfo_unexecuted_blocks=1 00:05:55.082 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:55.082 ' 00:05:55.082 15:05:20 event -- event/event.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:55.082 15:05:20 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:55.082 15:05:20 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:55.082 15:05:20 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:55.082 15:05:20 event -- common/autotest_common.sh@1111 -- # xtrace_disable 
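run_test, invoked just above for event_perf and before every other test in this section, is the harness wrapper that produces the START/END banners, the argument-count guard (the '[' N -le 1 ']' checks in the trace) and the real/user/sys timing lines. A simplified sketch of its shape; the real implementation lives in autotest_common.sh and also handles the xtrace and timing bookkeeping visible in the log:

  run_test() {
      # guard that appears in the trace as '[' N -le 1 ']'
      [ $# -le 1 ] && { echo 'Not enough parameters'; return 1; }
      local name=$1; shift

      echo "************************************"
      echo "START TEST $name"
      echo "************************************"

      time "$@"
      local rc=$?

      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
      return $rc
  }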
00:05:55.082 15:05:20 event -- common/autotest_common.sh@10 -- # set +x 00:05:55.340 ************************************ 00:05:55.340 START TEST event_perf 00:05:55.340 ************************************ 00:05:55.340 15:05:20 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:55.340 Running I/O for 1 seconds...[2024-11-27 15:05:20.437177] Starting SPDK v25.01-pre git sha1 2e10c84c8 / DPDK 24.03.0 initialization... 00:05:55.340 [2024-11-27 15:05:20.437251] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2355831 ] 00:05:55.340 [2024-11-27 15:05:20.508491] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:55.340 [2024-11-27 15:05:20.551960] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:55.340 [2024-11-27 15:05:20.552056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:55.340 [2024-11-27 15:05:20.552145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:55.340 [2024-11-27 15:05:20.552147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.273 Running I/O for 1 seconds... 00:05:56.273 lcore 0: 192570 00:05:56.273 lcore 1: 192570 00:05:56.273 lcore 2: 192571 00:05:56.273 lcore 3: 192569 00:05:56.273 done. 00:05:56.273 00:05:56.273 real 0m1.165s 00:05:56.273 user 0m4.082s 00:05:56.273 sys 0m0.082s 00:05:56.273 15:05:21 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:56.273 15:05:21 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:56.273 ************************************ 00:05:56.273 END TEST event_perf 00:05:56.273 ************************************ 00:05:56.531 15:05:21 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:56.531 15:05:21 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:56.531 15:05:21 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:56.531 15:05:21 event -- common/autotest_common.sh@10 -- # set +x 00:05:56.531 ************************************ 00:05:56.531 START TEST event_reactor 00:05:56.531 ************************************ 00:05:56.531 15:05:21 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:56.531 [2024-11-27 15:05:21.684413] Starting SPDK v25.01-pre git sha1 2e10c84c8 / DPDK 24.03.0 initialization... 
00:05:56.531 [2024-11-27 15:05:21.684520] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2355995 ] 00:05:56.531 [2024-11-27 15:05:21.758498] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.531 [2024-11-27 15:05:21.797883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.902 test_start 00:05:57.902 oneshot 00:05:57.902 tick 100 00:05:57.902 tick 100 00:05:57.902 tick 250 00:05:57.902 tick 100 00:05:57.902 tick 100 00:05:57.902 tick 100 00:05:57.902 tick 250 00:05:57.902 tick 500 00:05:57.902 tick 100 00:05:57.903 tick 100 00:05:57.903 tick 250 00:05:57.903 tick 100 00:05:57.903 tick 100 00:05:57.903 test_end 00:05:57.903 00:05:57.903 real 0m1.166s 00:05:57.903 user 0m1.078s 00:05:57.903 sys 0m0.085s 00:05:57.903 15:05:22 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:57.903 15:05:22 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:57.903 ************************************ 00:05:57.903 END TEST event_reactor 00:05:57.903 ************************************ 00:05:57.903 15:05:22 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:57.903 15:05:22 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:57.903 15:05:22 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:57.903 15:05:22 event -- common/autotest_common.sh@10 -- # set +x 00:05:57.903 ************************************ 00:05:57.903 START TEST event_reactor_perf 00:05:57.903 ************************************ 00:05:57.903 15:05:22 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:57.903 [2024-11-27 15:05:22.916588] Starting SPDK v25.01-pre git sha1 2e10c84c8 / DPDK 24.03.0 initialization... 
00:05:57.903 [2024-11-27 15:05:22.916702] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2356151 ] 00:05:57.903 [2024-11-27 15:05:22.990479] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.903 [2024-11-27 15:05:23.030028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.835 test_start 00:05:58.835 test_end 00:05:58.835 Performance: 963897 events per second 00:05:58.835 00:05:58.835 real 0m1.167s 00:05:58.835 user 0m1.084s 00:05:58.835 sys 0m0.079s 00:05:58.835 15:05:24 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:58.835 15:05:24 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:58.835 ************************************ 00:05:58.835 END TEST event_reactor_perf 00:05:58.835 ************************************ 00:05:58.835 15:05:24 event -- event/event.sh@49 -- # uname -s 00:05:58.835 15:05:24 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:58.835 15:05:24 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:58.835 15:05:24 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:58.835 15:05:24 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:58.835 15:05:24 event -- common/autotest_common.sh@10 -- # set +x 00:05:58.835 ************************************ 00:05:58.835 START TEST event_scheduler 00:05:58.835 ************************************ 00:05:58.835 15:05:24 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:59.124 * Looking for test storage... 
00:05:59.124 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler 00:05:59.124 15:05:24 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:59.124 15:05:24 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:05:59.124 15:05:24 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:59.124 15:05:24 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:59.124 15:05:24 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:59.124 15:05:24 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:59.124 15:05:24 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:59.124 15:05:24 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:59.124 15:05:24 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:59.124 15:05:24 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:59.124 15:05:24 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:59.124 15:05:24 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:59.124 15:05:24 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:59.124 15:05:24 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:59.124 15:05:24 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:59.124 15:05:24 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:59.124 15:05:24 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:59.124 15:05:24 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:59.124 15:05:24 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:59.125 15:05:24 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:59.125 15:05:24 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:59.125 15:05:24 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:59.125 15:05:24 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:59.125 15:05:24 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:59.125 15:05:24 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:59.125 15:05:24 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:59.125 15:05:24 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:59.125 15:05:24 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:59.125 15:05:24 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:59.125 15:05:24 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:59.125 15:05:24 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:59.125 15:05:24 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:59.125 15:05:24 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:59.125 15:05:24 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:59.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.125 --rc genhtml_branch_coverage=1 00:05:59.125 --rc genhtml_function_coverage=1 00:05:59.125 --rc genhtml_legend=1 00:05:59.125 --rc geninfo_all_blocks=1 00:05:59.125 --rc geninfo_unexecuted_blocks=1 00:05:59.125 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:59.125 ' 00:05:59.125 15:05:24 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:59.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.125 --rc genhtml_branch_coverage=1 00:05:59.125 --rc genhtml_function_coverage=1 00:05:59.125 --rc genhtml_legend=1 00:05:59.125 --rc geninfo_all_blocks=1 00:05:59.125 --rc geninfo_unexecuted_blocks=1 00:05:59.125 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:59.125 ' 00:05:59.125 15:05:24 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:59.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.125 --rc genhtml_branch_coverage=1 00:05:59.125 --rc genhtml_function_coverage=1 00:05:59.125 --rc genhtml_legend=1 00:05:59.125 --rc geninfo_all_blocks=1 00:05:59.125 --rc geninfo_unexecuted_blocks=1 00:05:59.125 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:59.125 ' 00:05:59.125 15:05:24 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:59.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.125 --rc genhtml_branch_coverage=1 00:05:59.125 --rc genhtml_function_coverage=1 00:05:59.125 --rc genhtml_legend=1 00:05:59.125 --rc geninfo_all_blocks=1 00:05:59.125 --rc geninfo_unexecuted_blocks=1 00:05:59.125 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:59.125 ' 00:05:59.125 15:05:24 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:59.125 15:05:24 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2356467 00:05:59.125 15:05:24 event.event_scheduler -- 
scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:59.125 15:05:24 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:59.125 15:05:24 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2356467 00:05:59.125 15:05:24 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 2356467 ']' 00:05:59.125 15:05:24 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.125 15:05:24 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:59.125 15:05:24 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:59.125 15:05:24 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:59.125 15:05:24 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:59.125 [2024-11-27 15:05:24.377812] Starting SPDK v25.01-pre git sha1 2e10c84c8 / DPDK 24.03.0 initialization... 00:05:59.125 [2024-11-27 15:05:24.377902] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2356467 ] 00:05:59.384 [2024-11-27 15:05:24.448614] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:59.384 [2024-11-27 15:05:24.494972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.384 [2024-11-27 15:05:24.494999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:59.384 [2024-11-27 15:05:24.495084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:59.384 [2024-11-27 15:05:24.495086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:59.384 15:05:24 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:59.384 15:05:24 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:59.384 15:05:24 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:59.384 15:05:24 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:59.384 15:05:24 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:59.384 [2024-11-27 15:05:24.559262] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:59.384 [2024-11-27 15:05:24.559285] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:59.384 [2024-11-27 15:05:24.559297] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:59.384 [2024-11-27 15:05:24.559305] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:59.384 [2024-11-27 15:05:24.559312] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:59.384 15:05:24 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:59.384 15:05:24 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:59.384 15:05:24 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 
00:05:59.384 15:05:24 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:59.384 [2024-11-27 15:05:24.634004] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:59.384 15:05:24 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:59.384 15:05:24 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:59.384 15:05:24 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:59.384 15:05:24 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:59.384 15:05:24 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:59.384 ************************************ 00:05:59.384 START TEST scheduler_create_thread 00:05:59.384 ************************************ 00:05:59.384 15:05:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:59.384 15:05:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:59.384 15:05:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:59.384 15:05:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.384 2 00:05:59.384 15:05:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:59.384 15:05:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:59.384 15:05:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:59.384 15:05:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.384 3 00:05:59.384 15:05:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:59.384 15:05:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:59.384 15:05:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:59.384 15:05:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.384 4 00:05:59.384 15:05:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:59.384 15:05:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:59.384 15:05:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:59.384 15:05:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.384 5 00:05:59.384 15:05:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:59.384 15:05:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:59.384 15:05:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:59.384 
15:05:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.384 6 00:05:59.384 15:05:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:59.384 15:05:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:59.384 15:05:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:59.384 15:05:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.642 7 00:05:59.642 15:05:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:59.642 15:05:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:59.642 15:05:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:59.642 15:05:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.642 8 00:05:59.642 15:05:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:59.642 15:05:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:59.642 15:05:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:59.642 15:05:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.642 9 00:05:59.642 15:05:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:59.642 15:05:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:59.642 15:05:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:59.642 15:05:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.642 10 00:05:59.642 15:05:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:59.642 15:05:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:59.642 15:05:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:59.642 15:05:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.642 15:05:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:59.642 15:05:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:59.642 15:05:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:59.642 15:05:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:59.642 15:05:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:00.205 15:05:25 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:00.205 15:05:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:00.205 15:05:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:00.205 15:05:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:01.580 15:05:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:01.580 15:05:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:01.580 15:05:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:01.580 15:05:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:01.580 15:05:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:02.512 15:05:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:02.512 00:06:02.512 real 0m3.097s 00:06:02.512 user 0m0.024s 00:06:02.512 sys 0m0.007s 00:06:02.512 15:05:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:02.512 15:05:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:02.512 ************************************ 00:06:02.512 END TEST scheduler_create_thread 00:06:02.512 ************************************ 00:06:02.512 15:05:27 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:02.512 15:05:27 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2356467 00:06:02.512 15:05:27 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 2356467 ']' 00:06:02.512 15:05:27 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 2356467 00:06:02.512 15:05:27 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:06:02.512 15:05:27 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:02.512 15:05:27 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2356467 00:06:02.768 15:05:27 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:02.768 15:05:27 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:02.768 15:05:27 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2356467' 00:06:02.768 killing process with pid 2356467 00:06:02.768 15:05:27 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 2356467 00:06:02.768 15:05:27 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 2356467 00:06:03.025 [2024-11-27 15:05:28.149173] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
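The scheduler_create_thread trace above drives the standalone scheduler app entirely through its test-plugin RPCs: pinned active and idle threads are created per core, one thread is switched to 50% active load, and a throwaway thread is created and deleted before the app is killed. A minimal sketch of the same lifecycle, driven by hand against an already-running scheduler app, follows; the plugin and method names are taken from the trace, while the socket path is an assumption and the thread id is whatever the create call prints (the trace's thread_id=11/12 assignments suggest it is written to stdout).

  # illustrative sketch, not part of the captured log; assumes the scheduler plugin is importable by rpc.py
  RPC="./scripts/rpc.py -s /var/tmp/spdk.sock"   # assumed RPC socket path
  # create a thread pinned to core 0 with 100% active load and capture its id
  tid=$($RPC --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100)
  # drop the same thread to 50% active, then remove it
  $RPC --plugin scheduler_plugin scheduler_thread_set_active "$tid" 50
  $RPC --plugin scheduler_plugin scheduler_thread_delete "$tid"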
00:06:03.026 00:06:03.026 real 0m4.175s 00:06:03.026 user 0m6.688s 00:06:03.026 sys 0m0.413s 00:06:03.026 15:05:28 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:03.026 15:05:28 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:03.026 ************************************ 00:06:03.026 END TEST event_scheduler 00:06:03.026 ************************************ 00:06:03.283 15:05:28 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:03.283 15:05:28 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:03.283 15:05:28 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:03.283 15:05:28 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:03.283 15:05:28 event -- common/autotest_common.sh@10 -- # set +x 00:06:03.283 ************************************ 00:06:03.283 START TEST app_repeat 00:06:03.283 ************************************ 00:06:03.283 15:05:28 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:06:03.283 15:05:28 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.283 15:05:28 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.283 15:05:28 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:03.283 15:05:28 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:03.283 15:05:28 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:03.283 15:05:28 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:03.283 15:05:28 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:03.283 15:05:28 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2357309 00:06:03.283 15:05:28 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:03.283 15:05:28 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:03.283 15:05:28 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2357309' 00:06:03.283 Process app_repeat pid: 2357309 00:06:03.283 15:05:28 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:03.283 15:05:28 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:03.283 spdk_app_start Round 0 00:06:03.283 15:05:28 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2357309 /var/tmp/spdk-nbd.sock 00:06:03.283 15:05:28 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2357309 ']' 00:06:03.283 15:05:28 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:03.283 15:05:28 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:03.283 15:05:28 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:03.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:03.283 15:05:28 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:03.283 15:05:28 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:03.283 [2024-11-27 15:05:28.453519] Starting SPDK v25.01-pre git sha1 2e10c84c8 / DPDK 24.03.0 initialization... 
00:06:03.283 [2024-11-27 15:05:28.453628] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2357309 ] 00:06:03.283 [2024-11-27 15:05:28.526390] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:03.283 [2024-11-27 15:05:28.566215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:03.283 [2024-11-27 15:05:28.566217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.540 15:05:28 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:03.540 15:05:28 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:03.540 15:05:28 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:03.540 Malloc0 00:06:03.540 15:05:28 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:03.798 Malloc1 00:06:03.798 15:05:29 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:03.798 15:05:29 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.798 15:05:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:03.798 15:05:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:03.798 15:05:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.798 15:05:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:03.798 15:05:29 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:03.798 15:05:29 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.798 15:05:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:03.798 15:05:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:03.798 15:05:29 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.798 15:05:29 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:03.798 15:05:29 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:03.798 15:05:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:03.798 15:05:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:03.798 15:05:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:04.055 /dev/nbd0 00:06:04.055 15:05:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:04.055 15:05:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:04.055 15:05:29 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:04.055 15:05:29 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:04.055 15:05:29 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:04.055 15:05:29 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:04.055 15:05:29 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 
/proc/partitions 00:06:04.055 15:05:29 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:04.055 15:05:29 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:04.055 15:05:29 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:04.055 15:05:29 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:04.055 1+0 records in 00:06:04.055 1+0 records out 00:06:04.055 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000228857 s, 17.9 MB/s 00:06:04.055 15:05:29 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:04.055 15:05:29 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:04.055 15:05:29 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:04.055 15:05:29 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:04.055 15:05:29 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:04.055 15:05:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:04.055 15:05:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:04.055 15:05:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:04.312 /dev/nbd1 00:06:04.312 15:05:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:04.312 15:05:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:04.312 15:05:29 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:04.312 15:05:29 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:04.312 15:05:29 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:04.312 15:05:29 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:04.312 15:05:29 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:04.312 15:05:29 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:04.312 15:05:29 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:04.312 15:05:29 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:04.312 15:05:29 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:04.312 1+0 records in 00:06:04.312 1+0 records out 00:06:04.312 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000250148 s, 16.4 MB/s 00:06:04.312 15:05:29 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:04.312 15:05:29 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:04.312 15:05:29 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:04.312 15:05:29 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:04.312 15:05:29 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:04.312 15:05:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:04.312 15:05:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 
00:06:04.312 15:05:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:04.312 15:05:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.312 15:05:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:04.570 15:05:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:04.570 { 00:06:04.570 "nbd_device": "/dev/nbd0", 00:06:04.570 "bdev_name": "Malloc0" 00:06:04.570 }, 00:06:04.570 { 00:06:04.570 "nbd_device": "/dev/nbd1", 00:06:04.570 "bdev_name": "Malloc1" 00:06:04.570 } 00:06:04.570 ]' 00:06:04.570 15:05:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:04.570 { 00:06:04.570 "nbd_device": "/dev/nbd0", 00:06:04.570 "bdev_name": "Malloc0" 00:06:04.570 }, 00:06:04.570 { 00:06:04.570 "nbd_device": "/dev/nbd1", 00:06:04.570 "bdev_name": "Malloc1" 00:06:04.570 } 00:06:04.570 ]' 00:06:04.570 15:05:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:04.570 15:05:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:04.570 /dev/nbd1' 00:06:04.570 15:05:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:04.570 /dev/nbd1' 00:06:04.570 15:05:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:04.570 15:05:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:04.570 15:05:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:04.570 15:05:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:04.570 15:05:29 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:04.570 15:05:29 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:04.570 15:05:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.570 15:05:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:04.570 15:05:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:04.570 15:05:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:04.570 15:05:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:04.570 15:05:29 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:04.570 256+0 records in 00:06:04.570 256+0 records out 00:06:04.570 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.011032 s, 95.0 MB/s 00:06:04.570 15:05:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:04.570 15:05:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:04.570 256+0 records in 00:06:04.570 256+0 records out 00:06:04.570 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0196833 s, 53.3 MB/s 00:06:04.570 15:05:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:04.570 15:05:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:04.570 256+0 records in 00:06:04.570 256+0 records out 00:06:04.570 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0215032 s, 48.8 MB/s 
00:06:04.570 15:05:29 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:04.570 15:05:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.570 15:05:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:04.570 15:05:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:04.570 15:05:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:04.570 15:05:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:04.570 15:05:29 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:04.570 15:05:29 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:04.570 15:05:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:04.570 15:05:29 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:04.570 15:05:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:04.570 15:05:29 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:04.570 15:05:29 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:04.570 15:05:29 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.570 15:05:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.570 15:05:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:04.570 15:05:29 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:04.570 15:05:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:04.570 15:05:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:04.827 15:05:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:04.827 15:05:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:04.827 15:05:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:04.827 15:05:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:04.827 15:05:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:04.827 15:05:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:04.827 15:05:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:04.827 15:05:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:04.827 15:05:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:04.827 15:05:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:05.084 15:05:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:05.084 15:05:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:05.084 15:05:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:05.084 15:05:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:05.084 15:05:30 event.app_repeat -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:05.084 15:05:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:05.084 15:05:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:05.084 15:05:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:05.084 15:05:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:05.084 15:05:30 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.084 15:05:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:05.342 15:05:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:05.342 15:05:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:05.342 15:05:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:05.342 15:05:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:05.342 15:05:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:05.342 15:05:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:05.342 15:05:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:05.342 15:05:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:05.342 15:05:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:05.342 15:05:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:05.342 15:05:30 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:05.342 15:05:30 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:05.342 15:05:30 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:05.600 15:05:30 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:05.600 [2024-11-27 15:05:30.935503] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:05.857 [2024-11-27 15:05:30.972425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:05.857 [2024-11-27 15:05:30.972426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.857 [2024-11-27 15:05:31.013817] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:05.857 [2024-11-27 15:05:31.013868] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:09.138 15:05:33 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:09.138 15:05:33 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:09.138 spdk_app_start Round 1 00:06:09.138 15:05:33 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2357309 /var/tmp/spdk-nbd.sock 00:06:09.138 15:05:33 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2357309 ']' 00:06:09.138 15:05:33 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:09.138 15:05:33 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:09.138 15:05:33 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:09.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:09.139 15:05:33 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:09.139 15:05:33 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:09.139 15:05:33 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:09.139 15:05:33 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:09.139 15:05:33 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:09.139 Malloc0 00:06:09.139 15:05:34 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:09.139 Malloc1 00:06:09.139 15:05:34 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:09.139 15:05:34 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.139 15:05:34 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:09.139 15:05:34 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:09.139 15:05:34 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.139 15:05:34 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:09.139 15:05:34 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:09.139 15:05:34 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.139 15:05:34 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:09.139 15:05:34 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:09.139 15:05:34 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.139 15:05:34 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:09.139 15:05:34 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:09.139 15:05:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:09.139 15:05:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:09.139 15:05:34 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:09.397 /dev/nbd0 00:06:09.397 15:05:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:09.397 15:05:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:09.397 15:05:34 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:09.397 15:05:34 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:09.397 15:05:34 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:09.397 15:05:34 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:09.397 15:05:34 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:09.397 15:05:34 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:09.397 15:05:34 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:09.397 15:05:34 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:09.397 15:05:34 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:09.397 1+0 records in 00:06:09.397 1+0 records out 00:06:09.397 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000236171 s, 17.3 MB/s 00:06:09.397 15:05:34 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:09.397 15:05:34 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:09.397 15:05:34 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:09.397 15:05:34 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:09.397 15:05:34 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:09.397 15:05:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:09.397 15:05:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:09.397 15:05:34 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:09.655 /dev/nbd1 00:06:09.655 15:05:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:09.655 15:05:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:09.655 15:05:34 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:09.655 15:05:34 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:09.655 15:05:34 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:09.655 15:05:34 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:09.655 15:05:34 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:09.655 15:05:34 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:09.655 15:05:34 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:09.655 15:05:34 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:09.655 15:05:34 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:09.655 1+0 records in 00:06:09.655 1+0 records out 00:06:09.655 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000261787 s, 15.6 MB/s 00:06:09.655 15:05:34 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:09.655 15:05:34 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:09.655 15:05:34 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:09.655 15:05:34 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:09.655 15:05:34 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:09.655 15:05:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:09.655 15:05:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:09.655 15:05:34 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:09.655 15:05:34 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.655 15:05:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_get_disks 00:06:09.914 15:05:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:09.914 { 00:06:09.914 "nbd_device": "/dev/nbd0", 00:06:09.914 "bdev_name": "Malloc0" 00:06:09.914 }, 00:06:09.914 { 00:06:09.914 "nbd_device": "/dev/nbd1", 00:06:09.914 "bdev_name": "Malloc1" 00:06:09.914 } 00:06:09.914 ]' 00:06:09.914 15:05:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:09.914 { 00:06:09.914 "nbd_device": "/dev/nbd0", 00:06:09.914 "bdev_name": "Malloc0" 00:06:09.914 }, 00:06:09.914 { 00:06:09.914 "nbd_device": "/dev/nbd1", 00:06:09.914 "bdev_name": "Malloc1" 00:06:09.914 } 00:06:09.914 ]' 00:06:09.914 15:05:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:09.914 15:05:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:09.914 /dev/nbd1' 00:06:09.914 15:05:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:09.914 /dev/nbd1' 00:06:09.914 15:05:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:09.914 15:05:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:09.914 15:05:35 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:09.914 15:05:35 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:09.914 15:05:35 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:09.914 15:05:35 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:09.914 15:05:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.914 15:05:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:09.914 15:05:35 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:09.914 15:05:35 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:09.914 15:05:35 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:09.914 15:05:35 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:09.914 256+0 records in 00:06:09.914 256+0 records out 00:06:09.914 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103677 s, 101 MB/s 00:06:09.914 15:05:35 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:09.914 15:05:35 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:09.914 256+0 records in 00:06:09.914 256+0 records out 00:06:09.914 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0196406 s, 53.4 MB/s 00:06:09.914 15:05:35 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:09.914 15:05:35 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:09.914 256+0 records in 00:06:09.914 256+0 records out 00:06:09.914 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0214298 s, 48.9 MB/s 00:06:09.914 15:05:35 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:09.914 15:05:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.914 15:05:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:09.914 15:05:35 event.app_repeat -- bdev/nbd_common.sh@71 -- # 
local operation=verify 00:06:09.914 15:05:35 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:09.914 15:05:35 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:09.914 15:05:35 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:09.914 15:05:35 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:09.914 15:05:35 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:09.914 15:05:35 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:09.914 15:05:35 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:09.914 15:05:35 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:09.914 15:05:35 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:09.914 15:05:35 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.914 15:05:35 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.914 15:05:35 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:09.914 15:05:35 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:09.914 15:05:35 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:09.914 15:05:35 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:10.173 15:05:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:10.173 15:05:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:10.173 15:05:35 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:10.173 15:05:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:10.173 15:05:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:10.173 15:05:35 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:10.173 15:05:35 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:10.173 15:05:35 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:10.173 15:05:35 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:10.173 15:05:35 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:10.432 15:05:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:10.432 15:05:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:10.432 15:05:35 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:10.432 15:05:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:10.432 15:05:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:10.432 15:05:35 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:10.432 15:05:35 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:10.432 15:05:35 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:10.432 15:05:35 event.app_repeat -- bdev/nbd_common.sh@104 -- # 
nbd_get_count /var/tmp/spdk-nbd.sock 00:06:10.432 15:05:35 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.432 15:05:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:10.690 15:05:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:10.690 15:05:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:10.690 15:05:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:10.690 15:05:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:10.690 15:05:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:10.690 15:05:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:10.690 15:05:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:10.690 15:05:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:10.690 15:05:35 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:10.690 15:05:35 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:10.690 15:05:35 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:10.690 15:05:35 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:10.690 15:05:35 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:10.948 15:05:36 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:10.948 [2024-11-27 15:05:36.209608] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:10.948 [2024-11-27 15:05:36.246073] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:10.948 [2024-11-27 15:05:36.246075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.206 [2024-11-27 15:05:36.287719] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:11.206 [2024-11-27 15:05:36.287762] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:13.737 15:05:39 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:13.737 15:05:39 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:13.737 spdk_app_start Round 2 00:06:13.737 15:05:39 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2357309 /var/tmp/spdk-nbd.sock 00:06:13.737 15:05:39 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2357309 ']' 00:06:13.737 15:05:39 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:13.737 15:05:39 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:13.737 15:05:39 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:13.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:13.737 15:05:39 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:13.737 15:05:39 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:13.996 15:05:39 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:13.996 15:05:39 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:13.996 15:05:39 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:14.254 Malloc0 00:06:14.254 15:05:39 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:14.513 Malloc1 00:06:14.513 15:05:39 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:14.513 15:05:39 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.513 15:05:39 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:14.513 15:05:39 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:14.513 15:05:39 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.513 15:05:39 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:14.513 15:05:39 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:14.513 15:05:39 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.513 15:05:39 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:14.513 15:05:39 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:14.513 15:05:39 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.513 15:05:39 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:14.513 15:05:39 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:14.513 15:05:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:14.513 15:05:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:14.513 15:05:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:14.513 /dev/nbd0 00:06:14.513 15:05:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:14.513 15:05:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:14.513 15:05:39 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:14.513 15:05:39 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:14.513 15:05:39 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:14.513 15:05:39 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:14.513 15:05:39 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:14.513 15:05:39 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:14.513 15:05:39 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:14.513 15:05:39 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:14.513 15:05:39 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:14.513 1+0 records in 00:06:14.513 1+0 records out 00:06:14.513 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00014843 s, 27.6 MB/s 00:06:14.513 15:05:39 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:14.513 15:05:39 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:14.513 15:05:39 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:14.513 15:05:39 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:14.513 15:05:39 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:14.773 15:05:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:14.773 15:05:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:14.773 15:05:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:14.773 /dev/nbd1 00:06:14.773 15:05:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:14.773 15:05:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:14.773 15:05:40 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:14.773 15:05:40 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:14.773 15:05:40 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:14.773 15:05:40 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:14.773 15:05:40 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:14.773 15:05:40 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:14.773 15:05:40 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:14.773 15:05:40 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:14.773 15:05:40 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:14.773 1+0 records in 00:06:14.773 1+0 records out 00:06:14.773 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000185171 s, 22.1 MB/s 00:06:14.773 15:05:40 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:14.773 15:05:40 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:14.773 15:05:40 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:14.773 15:05:40 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:14.773 15:05:40 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:14.773 15:05:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:14.773 15:05:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:14.773 15:05:40 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:14.773 15:05:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.773 15:05:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_get_disks 00:06:15.031 15:05:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:15.031 { 00:06:15.031 "nbd_device": "/dev/nbd0", 00:06:15.031 "bdev_name": "Malloc0" 00:06:15.031 }, 00:06:15.031 { 00:06:15.031 "nbd_device": "/dev/nbd1", 00:06:15.031 "bdev_name": "Malloc1" 00:06:15.031 } 00:06:15.031 ]' 00:06:15.031 15:05:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:15.031 { 00:06:15.031 "nbd_device": "/dev/nbd0", 00:06:15.031 "bdev_name": "Malloc0" 00:06:15.031 }, 00:06:15.031 { 00:06:15.031 "nbd_device": "/dev/nbd1", 00:06:15.031 "bdev_name": "Malloc1" 00:06:15.031 } 00:06:15.031 ]' 00:06:15.031 15:05:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:15.031 15:05:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:15.031 /dev/nbd1' 00:06:15.031 15:05:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:15.031 /dev/nbd1' 00:06:15.031 15:05:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:15.031 15:05:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:15.031 15:05:40 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:15.031 15:05:40 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:15.031 15:05:40 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:15.031 15:05:40 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:15.031 15:05:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.031 15:05:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:15.031 15:05:40 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:15.031 15:05:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:15.031 15:05:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:15.031 15:05:40 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:15.031 256+0 records in 00:06:15.031 256+0 records out 00:06:15.031 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0110805 s, 94.6 MB/s 00:06:15.031 15:05:40 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:15.031 15:05:40 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:15.290 256+0 records in 00:06:15.290 256+0 records out 00:06:15.290 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0201609 s, 52.0 MB/s 00:06:15.290 15:05:40 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:15.291 15:05:40 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:15.291 256+0 records in 00:06:15.291 256+0 records out 00:06:15.291 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0212697 s, 49.3 MB/s 00:06:15.291 15:05:40 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:15.291 15:05:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.291 15:05:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:15.291 15:05:40 event.app_repeat -- bdev/nbd_common.sh@71 -- # 
local operation=verify 00:06:15.291 15:05:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:15.291 15:05:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:15.291 15:05:40 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:15.291 15:05:40 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:15.291 15:05:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:15.291 15:05:40 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:15.291 15:05:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:15.291 15:05:40 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:15.291 15:05:40 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:15.291 15:05:40 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.291 15:05:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.291 15:05:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:15.291 15:05:40 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:15.291 15:05:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:15.291 15:05:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:15.550 15:05:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:15.550 15:05:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:15.550 15:05:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:15.550 15:05:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:15.550 15:05:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:15.550 15:05:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:15.550 15:05:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:15.550 15:05:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:15.550 15:05:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:15.550 15:05:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:15.550 15:05:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:15.550 15:05:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:15.550 15:05:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:15.550 15:05:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:15.550 15:05:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:15.550 15:05:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:15.550 15:05:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:15.550 15:05:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:15.550 15:05:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # 
nbd_get_count /var/tmp/spdk-nbd.sock 00:06:15.550 15:05:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.550 15:05:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:15.808 15:05:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:15.808 15:05:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:15.808 15:05:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:15.808 15:05:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:15.808 15:05:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:15.808 15:05:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:15.808 15:05:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:15.808 15:05:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:15.808 15:05:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:15.808 15:05:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:15.808 15:05:41 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:15.808 15:05:41 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:15.808 15:05:41 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:16.067 15:05:41 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:16.325 [2024-11-27 15:05:41.475925] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:16.325 [2024-11-27 15:05:41.512278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:16.325 [2024-11-27 15:05:41.512280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.325 [2024-11-27 15:05:41.553805] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:16.325 [2024-11-27 15:05:41.553852] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:19.610 15:05:44 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2357309 /var/tmp/spdk-nbd.sock 00:06:19.610 15:05:44 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2357309 ']' 00:06:19.610 15:05:44 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:19.610 15:05:44 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:19.610 15:05:44 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:19.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:19.610 15:05:44 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:19.610 15:05:44 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:19.610 15:05:44 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:19.610 15:05:44 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:19.610 15:05:44 event.app_repeat -- event/event.sh@39 -- # killprocess 2357309 00:06:19.610 15:05:44 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 2357309 ']' 00:06:19.610 15:05:44 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 2357309 00:06:19.610 15:05:44 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:06:19.610 15:05:44 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:19.610 15:05:44 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2357309 00:06:19.610 15:05:44 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:19.610 15:05:44 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:19.610 15:05:44 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2357309' 00:06:19.610 killing process with pid 2357309 00:06:19.610 15:05:44 event.app_repeat -- common/autotest_common.sh@973 -- # kill 2357309 00:06:19.610 15:05:44 event.app_repeat -- common/autotest_common.sh@978 -- # wait 2357309 00:06:19.610 spdk_app_start is called in Round 0. 00:06:19.610 Shutdown signal received, stop current app iteration 00:06:19.610 Starting SPDK v25.01-pre git sha1 2e10c84c8 / DPDK 24.03.0 reinitialization... 00:06:19.610 spdk_app_start is called in Round 1. 00:06:19.610 Shutdown signal received, stop current app iteration 00:06:19.610 Starting SPDK v25.01-pre git sha1 2e10c84c8 / DPDK 24.03.0 reinitialization... 00:06:19.610 spdk_app_start is called in Round 2. 00:06:19.610 Shutdown signal received, stop current app iteration 00:06:19.610 Starting SPDK v25.01-pre git sha1 2e10c84c8 / DPDK 24.03.0 reinitialization... 00:06:19.610 spdk_app_start is called in Round 3. 
00:06:19.610 Shutdown signal received, stop current app iteration 00:06:19.610 15:05:44 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:19.610 15:05:44 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:19.610 00:06:19.610 real 0m16.298s 00:06:19.610 user 0m35.095s 00:06:19.610 sys 0m3.173s 00:06:19.610 15:05:44 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:19.610 15:05:44 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:19.610 ************************************ 00:06:19.610 END TEST app_repeat 00:06:19.610 ************************************ 00:06:19.610 15:05:44 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:19.610 15:05:44 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:19.610 15:05:44 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:19.610 15:05:44 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:19.610 15:05:44 event -- common/autotest_common.sh@10 -- # set +x 00:06:19.610 ************************************ 00:06:19.610 START TEST cpu_locks 00:06:19.610 ************************************ 00:06:19.610 15:05:44 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:19.610 * Looking for test storage... 00:06:19.610 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:06:19.610 15:05:44 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:19.610 15:05:44 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:06:19.610 15:05:44 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:19.870 15:05:44 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:19.870 15:05:44 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:19.870 15:05:44 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:19.870 15:05:44 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:19.870 15:05:44 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:19.870 15:05:44 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:19.870 15:05:44 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:19.870 15:05:44 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:19.870 15:05:44 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:19.870 15:05:44 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:19.870 15:05:44 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:19.870 15:05:44 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:19.870 15:05:44 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:19.870 15:05:44 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:19.870 15:05:44 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:19.870 15:05:44 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:19.870 15:05:44 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:19.870 15:05:44 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:19.870 15:05:44 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:19.870 15:05:44 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:19.870 15:05:44 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:19.870 15:05:44 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:19.870 15:05:44 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:19.870 15:05:44 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:19.870 15:05:44 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:19.870 15:05:44 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:19.870 15:05:44 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:19.870 15:05:44 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:19.870 15:05:44 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:19.870 15:05:44 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:19.870 15:05:44 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:19.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.870 --rc genhtml_branch_coverage=1 00:06:19.870 --rc genhtml_function_coverage=1 00:06:19.870 --rc genhtml_legend=1 00:06:19.870 --rc geninfo_all_blocks=1 00:06:19.870 --rc geninfo_unexecuted_blocks=1 00:06:19.870 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:19.870 ' 00:06:19.870 15:05:44 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:19.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.870 --rc genhtml_branch_coverage=1 00:06:19.870 --rc genhtml_function_coverage=1 00:06:19.870 --rc genhtml_legend=1 00:06:19.870 --rc geninfo_all_blocks=1 00:06:19.870 --rc geninfo_unexecuted_blocks=1 00:06:19.870 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:19.870 ' 00:06:19.870 15:05:44 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:19.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.870 --rc genhtml_branch_coverage=1 00:06:19.870 --rc genhtml_function_coverage=1 00:06:19.870 --rc genhtml_legend=1 00:06:19.870 --rc geninfo_all_blocks=1 00:06:19.870 --rc geninfo_unexecuted_blocks=1 00:06:19.870 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:19.870 ' 00:06:19.870 15:05:44 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:19.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.870 --rc genhtml_branch_coverage=1 00:06:19.870 --rc genhtml_function_coverage=1 00:06:19.870 --rc genhtml_legend=1 00:06:19.870 --rc geninfo_all_blocks=1 00:06:19.870 --rc geninfo_unexecuted_blocks=1 00:06:19.870 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:19.870 ' 00:06:19.870 15:05:44 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:19.870 15:05:44 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:19.870 15:05:44 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:19.870 15:05:44 event.cpu_locks -- 
event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:19.870 15:05:44 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:19.870 15:05:44 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:19.870 15:05:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:19.870 ************************************ 00:06:19.870 START TEST default_locks 00:06:19.870 ************************************ 00:06:19.870 15:05:45 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:06:19.870 15:05:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:19.870 15:05:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2360469 00:06:19.870 15:05:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2360469 00:06:19.870 15:05:45 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 2360469 ']' 00:06:19.870 15:05:45 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.870 15:05:45 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:19.870 15:05:45 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.870 15:05:45 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:19.870 15:05:45 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:19.870 [2024-11-27 15:05:45.049517] Starting SPDK v25.01-pre git sha1 2e10c84c8 / DPDK 24.03.0 initialization... 
00:06:19.870 [2024-11-27 15:05:45.049561] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2360469 ] 00:06:19.870 [2024-11-27 15:05:45.116856] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.870 [2024-11-27 15:05:45.155869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.129 15:05:45 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:20.129 15:05:45 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:06:20.129 15:05:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2360469 00:06:20.129 15:05:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2360469 00:06:20.129 15:05:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:20.696 lslocks: write error 00:06:20.696 15:05:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2360469 00:06:20.696 15:05:46 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 2360469 ']' 00:06:20.696 15:05:46 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 2360469 00:06:20.696 15:05:46 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:06:20.696 15:05:46 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:20.696 15:05:46 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2360469 00:06:20.954 15:05:46 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:20.954 15:05:46 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:20.954 15:05:46 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2360469' 00:06:20.954 killing process with pid 2360469 00:06:20.955 15:05:46 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 2360469 00:06:20.955 15:05:46 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 2360469 00:06:21.214 15:05:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2360469 00:06:21.214 15:05:46 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:06:21.214 15:05:46 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2360469 00:06:21.214 15:05:46 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:21.214 15:05:46 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:21.214 15:05:46 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:21.214 15:05:46 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:21.214 15:05:46 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 2360469 00:06:21.214 15:05:46 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 2360469 ']' 00:06:21.214 15:05:46 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.214 15:05:46 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:06:21.214 15:05:46 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:21.214 15:05:46 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:21.214 15:05:46 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:21.214 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2360469) - No such process 00:06:21.214 ERROR: process (pid: 2360469) is no longer running 00:06:21.214 15:05:46 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:21.214 15:05:46 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:06:21.214 15:05:46 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:06:21.214 15:05:46 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:21.214 15:05:46 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:21.214 15:05:46 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:21.214 15:05:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:21.214 15:05:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:21.214 15:05:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:21.214 15:05:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:21.214 00:06:21.214 real 0m1.336s 00:06:21.214 user 0m1.328s 00:06:21.214 sys 0m0.674s 00:06:21.214 15:05:46 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:21.214 15:05:46 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:21.214 ************************************ 00:06:21.214 END TEST default_locks 00:06:21.214 ************************************ 00:06:21.214 15:05:46 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:21.214 15:05:46 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:21.214 15:05:46 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:21.214 15:05:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:21.214 ************************************ 00:06:21.214 START TEST default_locks_via_rpc 00:06:21.214 ************************************ 00:06:21.214 15:05:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:06:21.214 15:05:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:21.214 15:05:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2360727 00:06:21.214 15:05:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2360727 00:06:21.214 15:05:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2360727 ']' 00:06:21.214 15:05:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.214 15:05:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 
00:06:21.214 15:05:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:21.214 15:05:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:21.214 15:05:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.214 [2024-11-27 15:05:46.467791] Starting SPDK v25.01-pre git sha1 2e10c84c8 / DPDK 24.03.0 initialization... 00:06:21.214 [2024-11-27 15:05:46.467842] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2360727 ] 00:06:21.214 [2024-11-27 15:05:46.536138] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.473 [2024-11-27 15:05:46.580375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.473 15:05:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:21.473 15:05:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:21.473 15:05:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:21.473 15:05:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.473 15:05:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.473 15:05:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.473 15:05:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:21.473 15:05:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:21.473 15:05:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:21.473 15:05:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:21.473 15:05:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:21.473 15:05:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.473 15:05:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.473 15:05:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.473 15:05:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2360727 00:06:21.473 15:05:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2360727 00:06:21.473 15:05:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:22.043 15:05:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2360727 00:06:22.043 15:05:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 2360727 ']' 00:06:22.043 15:05:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 2360727 00:06:22.043 15:05:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:06:22.043 15:05:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- 
# '[' Linux = Linux ']' 00:06:22.043 15:05:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2360727 00:06:22.043 15:05:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:22.043 15:05:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:22.043 15:05:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2360727' 00:06:22.043 killing process with pid 2360727 00:06:22.043 15:05:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 2360727 00:06:22.043 15:05:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 2360727 00:06:22.301 00:06:22.301 real 0m1.186s 00:06:22.301 user 0m1.179s 00:06:22.301 sys 0m0.550s 00:06:22.301 15:05:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:22.301 15:05:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.301 ************************************ 00:06:22.301 END TEST default_locks_via_rpc 00:06:22.301 ************************************ 00:06:22.561 15:05:47 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:22.561 15:05:47 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:22.561 15:05:47 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:22.561 15:05:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:22.561 ************************************ 00:06:22.561 START TEST non_locking_app_on_locked_coremask 00:06:22.561 ************************************ 00:06:22.561 15:05:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:06:22.561 15:05:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2360877 00:06:22.561 15:05:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2360877 /var/tmp/spdk.sock 00:06:22.561 15:05:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:22.561 15:05:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2360877 ']' 00:06:22.561 15:05:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.561 15:05:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:22.561 15:05:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:22.561 15:05:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:22.561 15:05:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:22.561 [2024-11-27 15:05:47.736641] Starting SPDK v25.01-pre git sha1 2e10c84c8 / DPDK 24.03.0 initialization... 
00:06:22.561 [2024-11-27 15:05:47.736702] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2360877 ] 00:06:22.561 [2024-11-27 15:05:47.806755] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.562 [2024-11-27 15:05:47.849402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.844 15:05:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:22.844 15:05:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:22.844 15:05:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2361046 00:06:22.844 15:05:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2361046 /var/tmp/spdk2.sock 00:06:22.844 15:05:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:22.844 15:05:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2361046 ']' 00:06:22.844 15:05:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:22.844 15:05:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:22.844 15:05:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:22.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:22.844 15:05:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:22.844 15:05:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:22.844 [2024-11-27 15:05:48.081350] Starting SPDK v25.01-pre git sha1 2e10c84c8 / DPDK 24.03.0 initialization... 00:06:22.844 [2024-11-27 15:05:48.081419] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2361046 ] 00:06:23.150 [2024-11-27 15:05:48.176257] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:23.150 [2024-11-27 15:05:48.176287] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.150 [2024-11-27 15:05:48.264138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.757 15:05:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:23.757 15:05:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:23.757 15:05:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2360877 00:06:23.757 15:05:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2360877 00:06:23.757 15:05:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:24.695 lslocks: write error 00:06:24.695 15:05:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2360877 00:06:24.695 15:05:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2360877 ']' 00:06:24.695 15:05:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2360877 00:06:24.695 15:05:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:24.695 15:05:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:24.695 15:05:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2360877 00:06:24.695 15:05:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:24.695 15:05:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:24.695 15:05:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2360877' 00:06:24.695 killing process with pid 2360877 00:06:24.695 15:05:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2360877 00:06:24.695 15:05:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2360877 00:06:25.264 15:05:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2361046 00:06:25.264 15:05:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2361046 ']' 00:06:25.264 15:05:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2361046 00:06:25.264 15:05:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:25.264 15:05:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:25.264 15:05:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2361046 00:06:25.264 15:05:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:25.264 15:05:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:25.264 15:05:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2361046' 00:06:25.264 
killing process with pid 2361046 00:06:25.264 15:05:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2361046 00:06:25.264 15:05:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2361046 00:06:25.523 00:06:25.523 real 0m2.992s 00:06:25.523 user 0m3.127s 00:06:25.523 sys 0m1.113s 00:06:25.523 15:05:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:25.523 15:05:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:25.523 ************************************ 00:06:25.523 END TEST non_locking_app_on_locked_coremask 00:06:25.523 ************************************ 00:06:25.523 15:05:50 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:25.523 15:05:50 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:25.523 15:05:50 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:25.523 15:05:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:25.523 ************************************ 00:06:25.523 START TEST locking_app_on_unlocked_coremask 00:06:25.523 ************************************ 00:06:25.523 15:05:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:25.523 15:05:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:25.523 15:05:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2361431 00:06:25.523 15:05:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2361431 /var/tmp/spdk.sock 00:06:25.523 15:05:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2361431 ']' 00:06:25.523 15:05:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.523 15:05:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:25.523 15:05:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:25.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:25.523 15:05:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:25.523 15:05:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:25.523 [2024-11-27 15:05:50.796314] Starting SPDK v25.01-pre git sha1 2e10c84c8 / DPDK 24.03.0 initialization... 00:06:25.523 [2024-11-27 15:05:50.796368] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2361431 ] 00:06:25.782 [2024-11-27 15:05:50.866354] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:25.782 [2024-11-27 15:05:50.866382] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.782 [2024-11-27 15:05:50.909234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.782 15:05:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:25.782 15:05:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:25.782 15:05:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2361605 00:06:25.782 15:05:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2361605 /var/tmp/spdk2.sock 00:06:25.782 15:05:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:25.782 15:05:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2361605 ']' 00:06:25.782 15:05:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:25.782 15:05:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:25.782 15:05:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:25.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:25.782 15:05:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:25.782 15:05:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:26.041 [2024-11-27 15:05:51.143851] Starting SPDK v25.01-pre git sha1 2e10c84c8 / DPDK 24.03.0 initialization... 
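
The two launch lines above are the heart of this test case: the first target opts out of core-lock files, which is what lets a second target start on the very same core with its own RPC socket. Reduced to its essentials (binary path as used throughout this log, run from the SPDK workspace; this is a sketch of the scenario, not the test script itself):

    # First target: core 0, core-lock files disabled (no /var/tmp/spdk_cpu_lock_000 is created).
    ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &
    # Second target: same core mask, separate RPC socket; with locks disabled on the
    # first target there is nothing for this one to collide with, so it takes the lock itself.
    ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &
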
00:06:26.041 [2024-11-27 15:05:51.143940] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2361605 ] 00:06:26.041 [2024-11-27 15:05:51.239723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.041 [2024-11-27 15:05:51.326942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.977 15:05:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:26.977 15:05:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:26.977 15:05:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2361605 00:06:26.977 15:05:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:26.977 15:05:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2361605 00:06:27.913 lslocks: write error 00:06:27.913 15:05:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2361431 00:06:27.913 15:05:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2361431 ']' 00:06:27.913 15:05:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 2361431 00:06:27.913 15:05:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:27.913 15:05:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:27.913 15:05:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2361431 00:06:27.913 15:05:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:27.913 15:05:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:27.913 15:05:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2361431' 00:06:27.913 killing process with pid 2361431 00:06:27.913 15:05:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 2361431 00:06:27.913 15:05:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 2361431 00:06:28.481 15:05:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2361605 00:06:28.481 15:05:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2361605 ']' 00:06:28.481 15:05:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 2361605 00:06:28.481 15:05:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:28.481 15:05:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:28.481 15:05:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2361605 00:06:28.481 15:05:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:28.481 15:05:53 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:28.481 15:05:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2361605' 00:06:28.481 killing process with pid 2361605 00:06:28.481 15:05:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 2361605 00:06:28.481 15:05:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 2361605 00:06:28.740 00:06:28.740 real 0m3.190s 00:06:28.740 user 0m3.345s 00:06:28.740 sys 0m1.178s 00:06:28.740 15:05:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:28.740 15:05:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:28.740 ************************************ 00:06:28.740 END TEST locking_app_on_unlocked_coremask 00:06:28.740 ************************************ 00:06:28.740 15:05:54 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:28.740 15:05:54 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:28.740 15:05:54 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:28.740 15:05:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:28.740 ************************************ 00:06:28.740 START TEST locking_app_on_locked_coremask 00:06:28.740 ************************************ 00:06:28.740 15:05:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:28.740 15:05:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2362091 00:06:28.740 15:05:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2362091 /var/tmp/spdk.sock 00:06:28.740 15:05:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:28.740 15:05:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2362091 ']' 00:06:28.740 15:05:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.740 15:05:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:28.740 15:05:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:28.740 15:05:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:28.740 15:05:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:28.999 [2024-11-27 15:05:54.080907] Starting SPDK v25.01-pre git sha1 2e10c84c8 / DPDK 24.03.0 initialization... 
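
The "Waiting for process to start up and listen on UNIX domain socket ..." lines come from the waitforlisten helper. Its implementation is not shown in this log; the idea is a poll loop against the target's RPC socket, roughly as below (socket path as printed above; rpc_get_methods is chosen here only as a cheap probe and is an assumption, not necessarily what the helper calls):

    for _ in $(seq 1 100); do
        if scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; then
            break                      # target is up and answering RPCs
        fi
        sleep 0.1
    done
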
00:06:28.999 [2024-11-27 15:05:54.080975] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2362091 ] 00:06:28.999 [2024-11-27 15:05:54.152357] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.999 [2024-11-27 15:05:54.191302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.257 15:05:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:29.257 15:05:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:29.257 15:05:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2362219 00:06:29.257 15:05:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2362219 /var/tmp/spdk2.sock 00:06:29.257 15:05:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:29.257 15:05:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:29.257 15:05:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2362219 /var/tmp/spdk2.sock 00:06:29.257 15:05:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:29.257 15:05:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:29.257 15:05:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:29.257 15:05:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:29.257 15:05:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 2362219 /var/tmp/spdk2.sock 00:06:29.257 15:05:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2362219 ']' 00:06:29.257 15:05:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:29.257 15:05:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:29.257 15:05:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:29.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:29.257 15:05:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:29.257 15:05:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:29.257 [2024-11-27 15:05:54.425966] Starting SPDK v25.01-pre git sha1 2e10c84c8 / DPDK 24.03.0 initialization... 
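
This is the negative half of locking_app_on_locked_coremask: the first target (pid 2362091) holds the lock on core 0, and the second target is wrapped in NOT waitforlisten precisely because it is expected to die during startup. In isolation the attempt looks like this, with the outcome the log lines that follow confirm:

    ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
    # Expected to exit non-zero with:
    #   claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2362091 has claimed it.
    #   spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
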
00:06:29.257 [2024-11-27 15:05:54.426057] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2362219 ] 00:06:29.257 [2024-11-27 15:05:54.520100] app.c: 782:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2362091 has claimed it. 00:06:29.257 [2024-11-27 15:05:54.520143] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:29.822 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2362219) - No such process 00:06:29.822 ERROR: process (pid: 2362219) is no longer running 00:06:29.822 15:05:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:29.822 15:05:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:29.822 15:05:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:29.822 15:05:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:29.822 15:05:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:29.822 15:05:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:29.822 15:05:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2362091 00:06:29.822 15:05:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2362091 00:06:29.822 15:05:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:30.388 lslocks: write error 00:06:30.388 15:05:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2362091 00:06:30.388 15:05:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2362091 ']' 00:06:30.388 15:05:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2362091 00:06:30.388 15:05:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:30.388 15:05:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:30.388 15:05:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2362091 00:06:30.646 15:05:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:30.646 15:05:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:30.646 15:05:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2362091' 00:06:30.646 killing process with pid 2362091 00:06:30.646 15:05:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2362091 00:06:30.646 15:05:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2362091 00:06:30.904 00:06:30.904 real 0m1.976s 00:06:30.904 user 0m2.095s 00:06:30.904 sys 0m0.747s 00:06:30.904 15:05:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:06:30.904 15:05:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:30.904 ************************************ 00:06:30.904 END TEST locking_app_on_locked_coremask 00:06:30.904 ************************************ 00:06:30.904 15:05:56 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:30.904 15:05:56 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:30.904 15:05:56 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:30.904 15:05:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:30.904 ************************************ 00:06:30.904 START TEST locking_overlapped_coremask 00:06:30.904 ************************************ 00:06:30.904 15:05:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:30.904 15:05:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2362513 00:06:30.904 15:05:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2362513 /var/tmp/spdk.sock 00:06:30.904 15:05:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:30.904 15:05:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 2362513 ']' 00:06:30.904 15:05:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.904 15:05:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:30.904 15:05:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:30.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:30.904 15:05:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:30.904 15:05:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:30.904 [2024-11-27 15:05:56.135252] Starting SPDK v25.01-pre git sha1 2e10c84c8 / DPDK 24.03.0 initialization... 
00:06:30.904 [2024-11-27 15:05:56.135332] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2362513 ] 00:06:30.904 [2024-11-27 15:05:56.206938] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:31.162 [2024-11-27 15:05:56.252146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:31.162 [2024-11-27 15:05:56.252241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.162 [2024-11-27 15:05:56.252241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:31.162 15:05:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:31.162 15:05:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:31.162 15:05:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2362527 00:06:31.162 15:05:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2362527 /var/tmp/spdk2.sock 00:06:31.162 15:05:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:31.162 15:05:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:31.162 15:05:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2362527 /var/tmp/spdk2.sock 00:06:31.162 15:05:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:31.162 15:05:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:31.162 15:05:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:31.162 15:05:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:31.162 15:05:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 2362527 /var/tmp/spdk2.sock 00:06:31.162 15:05:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 2362527 ']' 00:06:31.162 15:05:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:31.162 15:05:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:31.162 15:05:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:31.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:31.163 15:05:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:31.163 15:05:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:31.163 [2024-11-27 15:05:56.491463] Starting SPDK v25.01-pre git sha1 2e10c84c8 / DPDK 24.03.0 initialization... 
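
Why the upcoming failure is specifically about core 2: the first target was started with -m 0x7 and the second with -m 0x1c, and those masks only share core 2. A quick way to see that with plain shell arithmetic:

    # 0x7  = 0b00111 -> cores 0,1,2   (first target, holds locks on all three)
    # 0x1c = 0b11100 -> cores 2,3,4   (second target)
    printf 'overlapping cores mask: 0x%x\n' $((0x7 & 0x1c))   # prints 0x4, i.e. core 2 only
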
00:06:31.163 [2024-11-27 15:05:56.491554] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2362527 ] 00:06:31.420 [2024-11-27 15:05:56.594122] app.c: 782:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2362513 has claimed it. 00:06:31.420 [2024-11-27 15:05:56.594162] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:31.984 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2362527) - No such process 00:06:31.984 ERROR: process (pid: 2362527) is no longer running 00:06:31.984 15:05:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:31.984 15:05:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:31.984 15:05:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:31.984 15:05:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:31.984 15:05:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:31.985 15:05:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:31.985 15:05:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:31.985 15:05:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:31.985 15:05:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:31.985 15:05:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:31.985 15:05:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2362513 00:06:31.985 15:05:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 2362513 ']' 00:06:31.985 15:05:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 2362513 00:06:31.985 15:05:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:31.985 15:05:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:31.985 15:05:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2362513 00:06:31.985 15:05:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:31.985 15:05:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:31.985 15:05:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2362513' 00:06:31.985 killing process with pid 2362513 00:06:31.985 15:05:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 2362513 00:06:31.985 15:05:57 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 2362513 00:06:32.244 00:06:32.244 real 0m1.412s 00:06:32.244 user 0m3.920s 00:06:32.244 sys 0m0.417s 00:06:32.244 15:05:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:32.244 15:05:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:32.244 ************************************ 00:06:32.244 END TEST locking_overlapped_coremask 00:06:32.244 ************************************ 00:06:32.244 15:05:57 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:32.244 15:05:57 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:32.244 15:05:57 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:32.244 15:05:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:32.502 ************************************ 00:06:32.502 START TEST locking_overlapped_coremask_via_rpc 00:06:32.502 ************************************ 00:06:32.502 15:05:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:32.502 15:05:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2362817 00:06:32.502 15:05:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2362817 /var/tmp/spdk.sock 00:06:32.502 15:05:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:32.502 15:05:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2362817 ']' 00:06:32.502 15:05:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.502 15:05:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:32.502 15:05:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:32.502 15:05:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:32.502 15:05:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:32.502 [2024-11-27 15:05:57.633059] Starting SPDK v25.01-pre git sha1 2e10c84c8 / DPDK 24.03.0 initialization... 00:06:32.502 [2024-11-27 15:05:57.633131] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2362817 ] 00:06:32.502 [2024-11-27 15:05:57.705030] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
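
The check_remaining_locks step run a few lines up (the /var/tmp/spdk_cpu_lock_* glob and the [[ ... ]] comparison) verified that, after the overlapping target failed, exactly the lock files for cores 0-2 of the surviving -m 0x7 target were present; the same check is repeated at the end of the via_rpc test below. Stripped of the test plumbing it amounts to:

    locks=(/var/tmp/spdk_cpu_lock_*)                     # lock files actually on disk
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})   # cores 0,1,2 of the surviving 0x7 target
    [[ "${locks[*]}" == "${locks_expected[*]}" ]] && echo "only the expected core locks remain"
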
00:06:32.502 [2024-11-27 15:05:57.705059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:32.502 [2024-11-27 15:05:57.749752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:32.502 [2024-11-27 15:05:57.749768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:32.502 [2024-11-27 15:05:57.749770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.760 15:05:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:32.760 15:05:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:32.760 15:05:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2362827 00:06:32.760 15:05:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2362827 /var/tmp/spdk2.sock 00:06:32.760 15:05:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:32.760 15:05:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2362827 ']' 00:06:32.760 15:05:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:32.760 15:05:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:32.760 15:05:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:32.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:32.760 15:05:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:32.760 15:05:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:32.760 [2024-11-27 15:05:57.992277] Starting SPDK v25.01-pre git sha1 2e10c84c8 / DPDK 24.03.0 initialization... 00:06:32.760 [2024-11-27 15:05:57.992334] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2362827 ] 00:06:32.760 [2024-11-27 15:05:58.093160] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
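
In the _via_rpc variant both targets start with --disable-cpumask-locks, so the overlapping masks (0x7 and 0x1c again) coexist at startup and the locks are only taken afterwards over JSON-RPC. This is the sequence exercised in the lines that follow, using the rpc_cmd helper from the test environment:

    rpc_cmd framework_enable_cpumask_locks                          # first target (default socket): claims cores 0-2
    rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # second target: expected to fail on shared core 2
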
00:06:32.760 [2024-11-27 15:05:58.093195] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:33.019 [2024-11-27 15:05:58.180669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:33.019 [2024-11-27 15:05:58.180783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:33.019 [2024-11-27 15:05:58.180785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:33.586 15:05:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:33.586 15:05:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:33.586 15:05:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:33.586 15:05:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.586 15:05:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:33.586 15:05:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.586 15:05:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:33.586 15:05:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:33.586 15:05:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:33.586 15:05:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:33.586 15:05:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:33.586 15:05:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:33.586 15:05:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:33.586 15:05:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:33.586 15:05:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.586 15:05:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:33.586 [2024-11-27 15:05:58.859672] app.c: 782:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2362817 has claimed it. 
00:06:33.586 request: 00:06:33.586 { 00:06:33.586 "method": "framework_enable_cpumask_locks", 00:06:33.586 "req_id": 1 00:06:33.586 } 00:06:33.586 Got JSON-RPC error response 00:06:33.586 response: 00:06:33.586 { 00:06:33.586 "code": -32603, 00:06:33.586 "message": "Failed to claim CPU core: 2" 00:06:33.586 } 00:06:33.586 15:05:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:33.586 15:05:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:33.586 15:05:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:33.586 15:05:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:33.586 15:05:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:33.586 15:05:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2362817 /var/tmp/spdk.sock 00:06:33.586 15:05:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2362817 ']' 00:06:33.586 15:05:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.586 15:05:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:33.586 15:05:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:33.586 15:05:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:33.586 15:05:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:33.844 15:05:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:33.844 15:05:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:33.844 15:05:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2362827 /var/tmp/spdk2.sock 00:06:33.844 15:05:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2362827 ']' 00:06:33.844 15:05:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:33.844 15:05:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:33.844 15:05:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:33.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
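
Unlike the startup-time failures earlier, a refused framework_enable_cpumask_locks call does not take the target down: the failure comes back to the RPC client as JSON-RPC error code -32603 with the "Failed to claim CPU core: 2" message, and both targets keep running, as the waitforlisten calls around this point confirm. Client-side, handling it is just a matter of checking the exit status, e.g. (a sketch using the same rpc_cmd helper):

    if ! out=$(rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 2>&1); then
        echo "lock claim refused: $out"   # target stays up; core 2 remains owned by the other process
    fi
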
00:06:33.844 15:05:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:33.845 15:05:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.112 15:05:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:34.112 15:05:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:34.112 15:05:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:34.112 15:05:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:34.112 15:05:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:34.112 15:05:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:34.112 00:06:34.112 real 0m1.664s 00:06:34.112 user 0m0.764s 00:06:34.112 sys 0m0.173s 00:06:34.112 15:05:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:34.112 15:05:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.112 ************************************ 00:06:34.112 END TEST locking_overlapped_coremask_via_rpc 00:06:34.112 ************************************ 00:06:34.112 15:05:59 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:34.112 15:05:59 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2362817 ]] 00:06:34.112 15:05:59 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2362817 00:06:34.112 15:05:59 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2362817 ']' 00:06:34.112 15:05:59 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2362817 00:06:34.112 15:05:59 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:34.112 15:05:59 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:34.112 15:05:59 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2362817 00:06:34.112 15:05:59 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:34.112 15:05:59 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:34.112 15:05:59 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2362817' 00:06:34.112 killing process with pid 2362817 00:06:34.112 15:05:59 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 2362817 00:06:34.112 15:05:59 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 2362817 00:06:34.376 15:05:59 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2362827 ]] 00:06:34.376 15:05:59 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2362827 00:06:34.376 15:05:59 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2362827 ']' 00:06:34.376 15:05:59 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2362827 00:06:34.376 15:05:59 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:34.376 15:05:59 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:06:34.376 15:05:59 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2362827 00:06:34.635 15:05:59 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:34.635 15:05:59 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:34.635 15:05:59 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2362827' 00:06:34.635 killing process with pid 2362827 00:06:34.635 15:05:59 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 2362827 00:06:34.635 15:05:59 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 2362827 00:06:34.893 15:06:00 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:34.893 15:06:00 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:34.893 15:06:00 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2362817 ]] 00:06:34.893 15:06:00 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2362817 00:06:34.893 15:06:00 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2362817 ']' 00:06:34.893 15:06:00 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2362817 00:06:34.893 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2362817) - No such process 00:06:34.893 15:06:00 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 2362817 is not found' 00:06:34.893 Process with pid 2362817 is not found 00:06:34.893 15:06:00 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2362827 ]] 00:06:34.893 15:06:00 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2362827 00:06:34.893 15:06:00 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2362827 ']' 00:06:34.893 15:06:00 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2362827 00:06:34.893 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2362827) - No such process 00:06:34.893 15:06:00 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 2362827 is not found' 00:06:34.893 Process with pid 2362827 is not found 00:06:34.893 15:06:00 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:34.893 00:06:34.893 real 0m15.260s 00:06:34.893 user 0m25.561s 00:06:34.893 sys 0m5.943s 00:06:34.893 15:06:00 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:34.893 15:06:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:34.893 ************************************ 00:06:34.893 END TEST cpu_locks 00:06:34.893 ************************************ 00:06:34.893 00:06:34.893 real 0m39.918s 00:06:34.893 user 1m13.868s 00:06:34.893 sys 0m10.233s 00:06:34.893 15:06:00 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:34.893 15:06:00 event -- common/autotest_common.sh@10 -- # set +x 00:06:34.893 ************************************ 00:06:34.893 END TEST event 00:06:34.893 ************************************ 00:06:34.893 15:06:00 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:06:34.893 15:06:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:34.893 15:06:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:34.893 15:06:00 -- common/autotest_common.sh@10 -- # set +x 00:06:34.893 ************************************ 00:06:34.893 START TEST thread 00:06:34.893 ************************************ 00:06:34.893 15:06:00 thread -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:06:35.152 * Looking for test storage... 00:06:35.152 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread 00:06:35.152 15:06:00 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:35.152 15:06:00 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:06:35.152 15:06:00 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:35.152 15:06:00 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:35.152 15:06:00 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:35.152 15:06:00 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:35.152 15:06:00 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:35.152 15:06:00 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:35.152 15:06:00 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:35.152 15:06:00 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:35.152 15:06:00 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:35.152 15:06:00 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:35.152 15:06:00 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:35.152 15:06:00 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:35.152 15:06:00 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:35.152 15:06:00 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:35.152 15:06:00 thread -- scripts/common.sh@345 -- # : 1 00:06:35.152 15:06:00 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:35.152 15:06:00 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:35.152 15:06:00 thread -- scripts/common.sh@365 -- # decimal 1 00:06:35.152 15:06:00 thread -- scripts/common.sh@353 -- # local d=1 00:06:35.152 15:06:00 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:35.152 15:06:00 thread -- scripts/common.sh@355 -- # echo 1 00:06:35.152 15:06:00 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:35.152 15:06:00 thread -- scripts/common.sh@366 -- # decimal 2 00:06:35.152 15:06:00 thread -- scripts/common.sh@353 -- # local d=2 00:06:35.152 15:06:00 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:35.152 15:06:00 thread -- scripts/common.sh@355 -- # echo 2 00:06:35.152 15:06:00 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:35.152 15:06:00 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:35.152 15:06:00 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:35.152 15:06:00 thread -- scripts/common.sh@368 -- # return 0 00:06:35.152 15:06:00 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:35.152 15:06:00 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:35.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.152 --rc genhtml_branch_coverage=1 00:06:35.152 --rc genhtml_function_coverage=1 00:06:35.152 --rc genhtml_legend=1 00:06:35.152 --rc geninfo_all_blocks=1 00:06:35.152 --rc geninfo_unexecuted_blocks=1 00:06:35.152 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:35.152 ' 00:06:35.152 15:06:00 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:35.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.152 --rc genhtml_branch_coverage=1 00:06:35.152 --rc genhtml_function_coverage=1 00:06:35.152 --rc genhtml_legend=1 
00:06:35.152 --rc geninfo_all_blocks=1 00:06:35.152 --rc geninfo_unexecuted_blocks=1 00:06:35.152 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:35.152 ' 00:06:35.152 15:06:00 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:35.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.152 --rc genhtml_branch_coverage=1 00:06:35.152 --rc genhtml_function_coverage=1 00:06:35.152 --rc genhtml_legend=1 00:06:35.152 --rc geninfo_all_blocks=1 00:06:35.152 --rc geninfo_unexecuted_blocks=1 00:06:35.152 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:35.152 ' 00:06:35.152 15:06:00 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:35.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.152 --rc genhtml_branch_coverage=1 00:06:35.152 --rc genhtml_function_coverage=1 00:06:35.152 --rc genhtml_legend=1 00:06:35.152 --rc geninfo_all_blocks=1 00:06:35.152 --rc geninfo_unexecuted_blocks=1 00:06:35.152 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:35.152 ' 00:06:35.152 15:06:00 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:35.152 15:06:00 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:35.152 15:06:00 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:35.152 15:06:00 thread -- common/autotest_common.sh@10 -- # set +x 00:06:35.152 ************************************ 00:06:35.152 START TEST thread_poller_perf 00:06:35.152 ************************************ 00:06:35.152 15:06:00 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:35.152 [2024-11-27 15:06:00.407872] Starting SPDK v25.01-pre git sha1 2e10c84c8 / DPDK 24.03.0 initialization... 00:06:35.152 [2024-11-27 15:06:00.407915] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2363476 ] 00:06:35.152 [2024-11-27 15:06:00.476294] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.411 [2024-11-27 15:06:00.517909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.411 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:36.346 [2024-11-27T14:06:01.686Z] ====================================== 00:06:36.346 [2024-11-27T14:06:01.686Z] busy:2504823528 (cyc) 00:06:36.346 [2024-11-27T14:06:01.686Z] total_run_count: 831000 00:06:36.346 [2024-11-27T14:06:01.686Z] tsc_hz: 2500000000 (cyc) 00:06:36.346 [2024-11-27T14:06:01.686Z] ====================================== 00:06:36.346 [2024-11-27T14:06:01.686Z] poller_cost: 3014 (cyc), 1205 (nsec) 00:06:36.346 00:06:36.346 real 0m1.159s 00:06:36.346 user 0m1.082s 00:06:36.346 sys 0m0.073s 00:06:36.346 15:06:01 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:36.346 15:06:01 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:36.346 ************************************ 00:06:36.346 END TEST thread_poller_perf 00:06:36.346 ************************************ 00:06:36.346 15:06:01 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:36.346 15:06:01 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:36.346 15:06:01 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:36.346 15:06:01 thread -- common/autotest_common.sh@10 -- # set +x 00:06:36.346 ************************************ 00:06:36.346 START TEST thread_poller_perf 00:06:36.346 ************************************ 00:06:36.346 15:06:01 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:36.346 [2024-11-27 15:06:01.649657] Starting SPDK v25.01-pre git sha1 2e10c84c8 / DPDK 24.03.0 initialization... 00:06:36.346 [2024-11-27 15:06:01.649740] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2363691 ] 00:06:36.606 [2024-11-27 15:06:01.725577] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.606 [2024-11-27 15:06:01.765309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.606 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:06:37.543 [2024-11-27T14:06:02.883Z] ====================================== 00:06:37.543 [2024-11-27T14:06:02.883Z] busy:2501377304 (cyc) 00:06:37.543 [2024-11-27T14:06:02.883Z] total_run_count: 12878000 00:06:37.543 [2024-11-27T14:06:02.883Z] tsc_hz: 2500000000 (cyc) 00:06:37.543 [2024-11-27T14:06:02.883Z] ====================================== 00:06:37.543 [2024-11-27T14:06:02.883Z] poller_cost: 194 (cyc), 77 (nsec) 00:06:37.543 00:06:37.543 real 0m1.170s 00:06:37.543 user 0m1.078s 00:06:37.543 sys 0m0.088s 00:06:37.543 15:06:02 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:37.543 15:06:02 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:37.543 ************************************ 00:06:37.543 END TEST thread_poller_perf 00:06:37.543 ************************************ 00:06:37.543 15:06:02 thread -- thread/thread.sh@17 -- # [[ n != \y ]] 00:06:37.543 15:06:02 thread -- thread/thread.sh@18 -- # run_test thread_spdk_lock /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:06:37.543 15:06:02 thread -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:37.543 15:06:02 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:37.543 15:06:02 thread -- common/autotest_common.sh@10 -- # set +x 00:06:37.802 ************************************ 00:06:37.802 START TEST thread_spdk_lock 00:06:37.802 ************************************ 00:06:37.802 15:06:02 thread.thread_spdk_lock -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:06:37.802 [2024-11-27 15:06:02.903119] Starting SPDK v25.01-pre git sha1 2e10c84c8 / DPDK 24.03.0 initialization... 00:06:37.802 [2024-11-27 15:06:02.903203] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2363904 ] 00:06:37.802 [2024-11-27 15:06:02.979477] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:37.802 [2024-11-27 15:06:03.022026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:37.802 [2024-11-27 15:06:03.022029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.367 [2024-11-27 15:06:03.516507] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 980:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:06:38.367 [2024-11-27 15:06:03.516542] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3112:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:06:38.367 [2024-11-27 15:06:03.516553] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3067:sspin_stacks_print: *ERROR*: spinlock 0x14dbc00 00:06:38.367 [2024-11-27 15:06:03.517298] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 875:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:06:38.367 [2024-11-27 15:06:03.517402] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:1041:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:06:38.367 [2024-11-27 
15:06:03.517419] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 875:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:06:38.367 Starting test contend 00:06:38.367 Worker Delay Wait us Hold us Total us 00:06:38.367 0 3 175284 185609 360893 00:06:38.367 1 5 89051 287692 376744 00:06:38.367 PASS test contend 00:06:38.367 Starting test hold_by_poller 00:06:38.367 PASS test hold_by_poller 00:06:38.367 Starting test hold_by_message 00:06:38.367 PASS test hold_by_message 00:06:38.367 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock summary: 00:06:38.367 100014 assertions passed 00:06:38.367 0 assertions failed 00:06:38.367 00:06:38.367 real 0m0.668s 00:06:38.367 user 0m1.077s 00:06:38.367 sys 0m0.082s 00:06:38.367 15:06:03 thread.thread_spdk_lock -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:38.367 15:06:03 thread.thread_spdk_lock -- common/autotest_common.sh@10 -- # set +x 00:06:38.367 ************************************ 00:06:38.367 END TEST thread_spdk_lock 00:06:38.367 ************************************ 00:06:38.367 00:06:38.367 real 0m3.398s 00:06:38.367 user 0m3.419s 00:06:38.367 sys 0m0.498s 00:06:38.367 15:06:03 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:38.367 15:06:03 thread -- common/autotest_common.sh@10 -- # set +x 00:06:38.367 ************************************ 00:06:38.367 END TEST thread 00:06:38.367 ************************************ 00:06:38.367 15:06:03 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:38.367 15:06:03 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh 00:06:38.367 15:06:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:38.367 15:06:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:38.367 15:06:03 -- common/autotest_common.sh@10 -- # set +x 00:06:38.367 ************************************ 00:06:38.367 START TEST app_cmdline 00:06:38.367 ************************************ 00:06:38.367 15:06:03 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh 00:06:38.625 * Looking for test storage... 
00:06:38.626 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:06:38.626 15:06:03 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:38.626 15:06:03 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:06:38.626 15:06:03 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:38.626 15:06:03 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:38.626 15:06:03 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:38.626 15:06:03 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:38.626 15:06:03 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:38.626 15:06:03 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:38.626 15:06:03 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:38.626 15:06:03 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:38.626 15:06:03 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:38.626 15:06:03 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:38.626 15:06:03 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:38.626 15:06:03 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:38.626 15:06:03 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:38.626 15:06:03 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:38.626 15:06:03 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:38.626 15:06:03 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:38.626 15:06:03 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:38.626 15:06:03 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:38.626 15:06:03 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:38.626 15:06:03 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:38.626 15:06:03 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:38.626 15:06:03 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:38.626 15:06:03 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:38.626 15:06:03 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:38.626 15:06:03 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:38.626 15:06:03 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:38.626 15:06:03 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:38.626 15:06:03 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:38.626 15:06:03 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:38.626 15:06:03 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:38.626 15:06:03 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:38.626 15:06:03 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:38.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.626 --rc genhtml_branch_coverage=1 00:06:38.626 --rc genhtml_function_coverage=1 00:06:38.626 --rc genhtml_legend=1 00:06:38.626 --rc geninfo_all_blocks=1 00:06:38.626 --rc geninfo_unexecuted_blocks=1 00:06:38.626 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:38.626 ' 00:06:38.626 15:06:03 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:38.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.626 --rc genhtml_branch_coverage=1 00:06:38.626 --rc genhtml_function_coverage=1 00:06:38.626 --rc 
genhtml_legend=1 00:06:38.626 --rc geninfo_all_blocks=1 00:06:38.626 --rc geninfo_unexecuted_blocks=1 00:06:38.626 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:38.626 ' 00:06:38.626 15:06:03 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:38.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.626 --rc genhtml_branch_coverage=1 00:06:38.626 --rc genhtml_function_coverage=1 00:06:38.626 --rc genhtml_legend=1 00:06:38.626 --rc geninfo_all_blocks=1 00:06:38.626 --rc geninfo_unexecuted_blocks=1 00:06:38.626 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:38.626 ' 00:06:38.626 15:06:03 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:38.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.626 --rc genhtml_branch_coverage=1 00:06:38.626 --rc genhtml_function_coverage=1 00:06:38.626 --rc genhtml_legend=1 00:06:38.626 --rc geninfo_all_blocks=1 00:06:38.626 --rc geninfo_unexecuted_blocks=1 00:06:38.626 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:38.626 ' 00:06:38.626 15:06:03 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:38.626 15:06:03 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2364233 00:06:38.626 15:06:03 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:38.626 15:06:03 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2364233 00:06:38.626 15:06:03 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 2364233 ']' 00:06:38.626 15:06:03 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.626 15:06:03 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:38.626 15:06:03 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.626 15:06:03 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:38.626 15:06:03 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:38.626 [2024-11-27 15:06:03.900478] Starting SPDK v25.01-pre git sha1 2e10c84c8 / DPDK 24.03.0 initialization... 
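cmdline.sh above launches a spdk_tgt that whitelists only two RPCs and then waits for its UNIX socket before running any checks. A hedged sketch of that launch sequence, using the binary and socket paths from this log; the polling loop is an assumption, not the real waitforlisten implementation:

    ./build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
    spdk_tgt_pid=$!
    # poll the JSON-RPC socket until the target answers, then start the cmdline checks
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done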
00:06:38.626 [2024-11-27 15:06:03.900555] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2364233 ] 00:06:38.885 [2024-11-27 15:06:03.971859] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.885 [2024-11-27 15:06:04.014456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.885 15:06:04 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:38.885 15:06:04 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:38.885 15:06:04 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:39.144 { 00:06:39.144 "version": "SPDK v25.01-pre git sha1 2e10c84c8", 00:06:39.144 "fields": { 00:06:39.144 "major": 25, 00:06:39.144 "minor": 1, 00:06:39.144 "patch": 0, 00:06:39.144 "suffix": "-pre", 00:06:39.144 "commit": "2e10c84c8" 00:06:39.144 } 00:06:39.144 } 00:06:39.144 15:06:04 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:39.144 15:06:04 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:39.144 15:06:04 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:39.144 15:06:04 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:39.144 15:06:04 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:39.144 15:06:04 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:39.144 15:06:04 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:39.144 15:06:04 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:39.144 15:06:04 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:39.144 15:06:04 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:39.144 15:06:04 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:39.144 15:06:04 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:39.144 15:06:04 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:39.144 15:06:04 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:39.144 15:06:04 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:39.144 15:06:04 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:06:39.144 15:06:04 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:39.144 15:06:04 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:06:39.144 15:06:04 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:39.144 15:06:04 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:06:39.144 15:06:04 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:39.144 15:06:04 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:06:39.144 15:06:04 app_cmdline -- 
common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py ]] 00:06:39.144 15:06:04 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:39.404 request: 00:06:39.404 { 00:06:39.404 "method": "env_dpdk_get_mem_stats", 00:06:39.404 "req_id": 1 00:06:39.404 } 00:06:39.404 Got JSON-RPC error response 00:06:39.404 response: 00:06:39.404 { 00:06:39.404 "code": -32601, 00:06:39.404 "message": "Method not found" 00:06:39.404 } 00:06:39.404 15:06:04 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:39.404 15:06:04 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:39.404 15:06:04 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:39.404 15:06:04 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:39.404 15:06:04 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2364233 00:06:39.404 15:06:04 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 2364233 ']' 00:06:39.404 15:06:04 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 2364233 00:06:39.404 15:06:04 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:39.404 15:06:04 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:39.404 15:06:04 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2364233 00:06:39.404 15:06:04 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:39.404 15:06:04 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:39.404 15:06:04 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2364233' 00:06:39.404 killing process with pid 2364233 00:06:39.404 15:06:04 app_cmdline -- common/autotest_common.sh@973 -- # kill 2364233 00:06:39.404 15:06:04 app_cmdline -- common/autotest_common.sh@978 -- # wait 2364233 00:06:39.972 00:06:39.972 real 0m1.330s 00:06:39.972 user 0m1.480s 00:06:39.972 sys 0m0.525s 00:06:39.972 15:06:05 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:39.972 15:06:05 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:39.972 ************************************ 00:06:39.972 END TEST app_cmdline 00:06:39.972 ************************************ 00:06:39.972 15:06:05 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh 00:06:39.972 15:06:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:39.972 15:06:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:39.972 15:06:05 -- common/autotest_common.sh@10 -- # set +x 00:06:39.972 ************************************ 00:06:39.972 START TEST version 00:06:39.972 ************************************ 00:06:39.972 15:06:05 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh 00:06:39.972 * Looking for test storage... 
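The -32601 response above is the expected result of the --rpcs-allowed filter: only the two whitelisted methods are reachable on this target. A short sketch of the behaviour the test exercises, with rpc.py and the socket path as used throughout this log:

    RPC="./scripts/rpc.py -s /var/tmp/spdk.sock"
    $RPC spdk_get_version        # allowed  -> returns the version JSON shown above
    $RPC rpc_get_methods         # allowed  -> lists exactly the permitted methods
    $RPC env_dpdk_get_mem_stats  # filtered -> JSON-RPC error -32601 "Method not found"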
00:06:39.972 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:06:39.972 15:06:05 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:39.972 15:06:05 version -- common/autotest_common.sh@1693 -- # lcov --version 00:06:39.972 15:06:05 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:39.972 15:06:05 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:39.972 15:06:05 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:39.972 15:06:05 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:39.972 15:06:05 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:39.972 15:06:05 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:39.972 15:06:05 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:39.972 15:06:05 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:39.972 15:06:05 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:39.972 15:06:05 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:39.972 15:06:05 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:39.972 15:06:05 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:39.972 15:06:05 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:39.972 15:06:05 version -- scripts/common.sh@344 -- # case "$op" in 00:06:39.972 15:06:05 version -- scripts/common.sh@345 -- # : 1 00:06:39.972 15:06:05 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:39.972 15:06:05 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:39.972 15:06:05 version -- scripts/common.sh@365 -- # decimal 1 00:06:39.972 15:06:05 version -- scripts/common.sh@353 -- # local d=1 00:06:39.972 15:06:05 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:39.972 15:06:05 version -- scripts/common.sh@355 -- # echo 1 00:06:39.972 15:06:05 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:39.972 15:06:05 version -- scripts/common.sh@366 -- # decimal 2 00:06:39.972 15:06:05 version -- scripts/common.sh@353 -- # local d=2 00:06:39.972 15:06:05 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:39.972 15:06:05 version -- scripts/common.sh@355 -- # echo 2 00:06:39.972 15:06:05 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:39.972 15:06:05 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:39.972 15:06:05 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:39.972 15:06:05 version -- scripts/common.sh@368 -- # return 0 00:06:39.972 15:06:05 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:39.972 15:06:05 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:39.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.972 --rc genhtml_branch_coverage=1 00:06:39.972 --rc genhtml_function_coverage=1 00:06:39.972 --rc genhtml_legend=1 00:06:39.972 --rc geninfo_all_blocks=1 00:06:39.972 --rc geninfo_unexecuted_blocks=1 00:06:39.972 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:39.972 ' 00:06:39.972 15:06:05 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:39.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.972 --rc genhtml_branch_coverage=1 00:06:39.972 --rc genhtml_function_coverage=1 00:06:39.972 --rc genhtml_legend=1 00:06:39.972 --rc geninfo_all_blocks=1 00:06:39.972 --rc geninfo_unexecuted_blocks=1 00:06:39.972 --gcov-tool 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:39.972 ' 00:06:39.972 15:06:05 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:39.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.972 --rc genhtml_branch_coverage=1 00:06:39.972 --rc genhtml_function_coverage=1 00:06:39.972 --rc genhtml_legend=1 00:06:39.972 --rc geninfo_all_blocks=1 00:06:39.972 --rc geninfo_unexecuted_blocks=1 00:06:39.972 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:39.972 ' 00:06:39.972 15:06:05 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:39.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.972 --rc genhtml_branch_coverage=1 00:06:39.972 --rc genhtml_function_coverage=1 00:06:39.972 --rc genhtml_legend=1 00:06:39.972 --rc geninfo_all_blocks=1 00:06:39.972 --rc geninfo_unexecuted_blocks=1 00:06:39.972 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:39.972 ' 00:06:39.972 15:06:05 version -- app/version.sh@17 -- # get_header_version major 00:06:39.972 15:06:05 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:06:39.972 15:06:05 version -- app/version.sh@14 -- # cut -f2 00:06:39.972 15:06:05 version -- app/version.sh@14 -- # tr -d '"' 00:06:39.972 15:06:05 version -- app/version.sh@17 -- # major=25 00:06:39.972 15:06:05 version -- app/version.sh@18 -- # get_header_version minor 00:06:39.972 15:06:05 version -- app/version.sh@14 -- # cut -f2 00:06:39.972 15:06:05 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:06:39.972 15:06:05 version -- app/version.sh@14 -- # tr -d '"' 00:06:39.972 15:06:05 version -- app/version.sh@18 -- # minor=1 00:06:39.972 15:06:05 version -- app/version.sh@19 -- # get_header_version patch 00:06:39.972 15:06:05 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:06:39.972 15:06:05 version -- app/version.sh@14 -- # tr -d '"' 00:06:39.972 15:06:05 version -- app/version.sh@14 -- # cut -f2 00:06:40.231 15:06:05 version -- app/version.sh@19 -- # patch=0 00:06:40.231 15:06:05 version -- app/version.sh@20 -- # get_header_version suffix 00:06:40.231 15:06:05 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:06:40.231 15:06:05 version -- app/version.sh@14 -- # cut -f2 00:06:40.231 15:06:05 version -- app/version.sh@14 -- # tr -d '"' 00:06:40.231 15:06:05 version -- app/version.sh@20 -- # suffix=-pre 00:06:40.231 15:06:05 version -- app/version.sh@22 -- # version=25.1 00:06:40.231 15:06:05 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:40.231 15:06:05 version -- app/version.sh@28 -- # version=25.1rc0 00:06:40.231 15:06:05 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:06:40.231 15:06:05 version -- app/version.sh@30 -- # 
python3 -c 'import spdk; print(spdk.__version__)' 00:06:40.231 15:06:05 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:40.231 15:06:05 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:40.231 00:06:40.231 real 0m0.270s 00:06:40.231 user 0m0.154s 00:06:40.231 sys 0m0.167s 00:06:40.231 15:06:05 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:40.231 15:06:05 version -- common/autotest_common.sh@10 -- # set +x 00:06:40.231 ************************************ 00:06:40.231 END TEST version 00:06:40.231 ************************************ 00:06:40.231 15:06:05 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:40.231 15:06:05 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:40.231 15:06:05 -- spdk/autotest.sh@194 -- # uname -s 00:06:40.231 15:06:05 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:40.231 15:06:05 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:40.231 15:06:05 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:40.231 15:06:05 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:06:40.231 15:06:05 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:40.231 15:06:05 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:40.231 15:06:05 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:40.231 15:06:05 -- common/autotest_common.sh@10 -- # set +x 00:06:40.231 15:06:05 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:40.231 15:06:05 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:06:40.231 15:06:05 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:06:40.231 15:06:05 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:06:40.231 15:06:05 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:06:40.231 15:06:05 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:06:40.231 15:06:05 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:06:40.231 15:06:05 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:06:40.231 15:06:05 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:06:40.231 15:06:05 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:06:40.231 15:06:05 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:06:40.231 15:06:05 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:06:40.231 15:06:05 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:06:40.231 15:06:05 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:06:40.231 15:06:05 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:06:40.231 15:06:05 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:06:40.231 15:06:05 -- spdk/autotest.sh@374 -- # [[ 1 -eq 1 ]] 00:06:40.231 15:06:05 -- spdk/autotest.sh@375 -- # run_test llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:06:40.231 15:06:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:40.231 15:06:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:40.231 15:06:05 -- common/autotest_common.sh@10 -- # set +x 00:06:40.231 ************************************ 00:06:40.231 START TEST llvm_fuzz 00:06:40.231 ************************************ 00:06:40.231 15:06:05 llvm_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:06:40.490 * Looking for test storage... 
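version.sh above reassembles the version string from include/spdk/version.h and cross-checks it against the installed Python package. A sketch of that parsing, following the grep | cut | tr pipeline in the trace; it assumes a tab separates the macro name from its value (which is what cut -f2 relies on), and the rc0 suffix rule is inferred from this run's output:

    hdr=include/spdk/version.h
    get_header_version() {
        grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" "$hdr" | cut -f2 | tr -d '"'
    }
    major=$(get_header_version MAJOR)    # 25
    minor=$(get_header_version MINOR)    # 1
    patch=$(get_header_version PATCH)    # 0
    suffix=$(get_header_version SUFFIX)  # -pre
    version="${major}.${minor}"
    (( patch != 0 )) && version="${version}.${patch}"
    [[ $suffix == -pre ]] && version="${version}rc0"
    py_version=$(python3 -c 'import spdk; print(spdk.__version__)')
    [[ $py_version == "$version" ]]      # both sides read 25.1rc0 in this run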
00:06:40.490 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz 00:06:40.490 15:06:05 llvm_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:40.490 15:06:05 llvm_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:06:40.490 15:06:05 llvm_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:40.490 15:06:05 llvm_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:40.490 15:06:05 llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:40.490 15:06:05 llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:40.490 15:06:05 llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:40.490 15:06:05 llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:06:40.490 15:06:05 llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:06:40.490 15:06:05 llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:06:40.490 15:06:05 llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:06:40.490 15:06:05 llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:06:40.490 15:06:05 llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:06:40.490 15:06:05 llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:06:40.490 15:06:05 llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:40.490 15:06:05 llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:06:40.490 15:06:05 llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:06:40.490 15:06:05 llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:40.490 15:06:05 llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:40.490 15:06:05 llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:06:40.490 15:06:05 llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:06:40.490 15:06:05 llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:40.490 15:06:05 llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:06:40.490 15:06:05 llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:06:40.490 15:06:05 llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:06:40.490 15:06:05 llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:06:40.490 15:06:05 llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:40.490 15:06:05 llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:06:40.490 15:06:05 llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:06:40.490 15:06:05 llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:40.490 15:06:05 llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:40.490 15:06:05 llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:06:40.490 15:06:05 llvm_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:40.490 15:06:05 llvm_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:40.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.490 --rc genhtml_branch_coverage=1 00:06:40.490 --rc genhtml_function_coverage=1 00:06:40.490 --rc genhtml_legend=1 00:06:40.490 --rc geninfo_all_blocks=1 00:06:40.490 --rc geninfo_unexecuted_blocks=1 00:06:40.490 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:40.490 ' 00:06:40.490 15:06:05 llvm_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:40.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.490 --rc genhtml_branch_coverage=1 00:06:40.490 --rc genhtml_function_coverage=1 00:06:40.490 --rc genhtml_legend=1 00:06:40.490 --rc geninfo_all_blocks=1 00:06:40.490 --rc 
geninfo_unexecuted_blocks=1 00:06:40.490 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:40.490 ' 00:06:40.490 15:06:05 llvm_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:40.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.490 --rc genhtml_branch_coverage=1 00:06:40.490 --rc genhtml_function_coverage=1 00:06:40.490 --rc genhtml_legend=1 00:06:40.490 --rc geninfo_all_blocks=1 00:06:40.490 --rc geninfo_unexecuted_blocks=1 00:06:40.490 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:40.490 ' 00:06:40.490 15:06:05 llvm_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:40.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.490 --rc genhtml_branch_coverage=1 00:06:40.490 --rc genhtml_function_coverage=1 00:06:40.490 --rc genhtml_legend=1 00:06:40.490 --rc geninfo_all_blocks=1 00:06:40.490 --rc geninfo_unexecuted_blocks=1 00:06:40.490 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:40.490 ' 00:06:40.490 15:06:05 llvm_fuzz -- fuzz/llvm.sh@11 -- # fuzzers=($(get_fuzzer_targets)) 00:06:40.490 15:06:05 llvm_fuzz -- fuzz/llvm.sh@11 -- # get_fuzzer_targets 00:06:40.490 15:06:05 llvm_fuzz -- common/autotest_common.sh@550 -- # fuzzers=() 00:06:40.490 15:06:05 llvm_fuzz -- common/autotest_common.sh@550 -- # local fuzzers 00:06:40.490 15:06:05 llvm_fuzz -- common/autotest_common.sh@552 -- # [[ -n '' ]] 00:06:40.490 15:06:05 llvm_fuzz -- common/autotest_common.sh@555 -- # fuzzers=("$rootdir/test/fuzz/llvm/"*) 00:06:40.490 15:06:05 llvm_fuzz -- common/autotest_common.sh@556 -- # fuzzers=("${fuzzers[@]##*/}") 00:06:40.490 15:06:05 llvm_fuzz -- common/autotest_common.sh@559 -- # echo 'common.sh llvm-gcov.sh nvmf vfio' 00:06:40.490 15:06:05 llvm_fuzz -- fuzz/llvm.sh@13 -- # llvm_out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm 00:06:40.490 15:06:05 llvm_fuzz -- fuzz/llvm.sh@15 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm 00:06:40.490 15:06:05 llvm_fuzz -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:06:40.490 15:06:05 llvm_fuzz -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:06:40.490 15:06:05 llvm_fuzz -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:06:40.490 15:06:05 llvm_fuzz -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:06:40.490 15:06:05 llvm_fuzz -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:06:40.490 15:06:05 llvm_fuzz -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:06:40.491 15:06:05 llvm_fuzz -- fuzz/llvm.sh@19 -- # run_test nvmf_llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:06:40.491 15:06:05 llvm_fuzz -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:40.491 15:06:05 llvm_fuzz -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:40.491 15:06:05 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:06:40.491 ************************************ 00:06:40.491 START TEST nvmf_llvm_fuzz 00:06:40.491 ************************************ 00:06:40.491 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:06:40.491 * Looking for test storage... 
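get_fuzzer_targets above falls back to globbing test/fuzz/llvm/* and stripping the directory prefix, which is why the helper scripts common.sh and llvm-gcov.sh are listed next to the real nvmf and vfio targets; llvm.sh then skips the helpers in its case statement. A rough sketch of that loop, reusing rootdir and run_test from this log (the exact case patterns are an assumption):

    fuzzers=("$rootdir"/test/fuzz/llvm/*)     # common.sh llvm-gcov.sh nvmf vfio
    fuzzers=("${fuzzers[@]##*/}")
    for fuzzer in "${fuzzers[@]}"; do
        case "$fuzzer" in
            nvmf|vfio) run_test "${fuzzer}_llvm_fuzz" "$rootdir/test/fuzz/llvm/$fuzzer/run.sh" ;;
            *) ;;                             # common.sh / llvm-gcov.sh are helpers, not targets
        esac
    done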
00:06:40.491 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:40.491 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:40.491 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:40.491 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:06:40.752 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:40.752 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:40.752 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:40.752 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:40.752 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:06:40.752 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:06:40.752 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:06:40.752 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:06:40.752 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:06:40.752 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:06:40.752 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:06:40.752 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:40.752 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:06:40.752 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:06:40.752 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:40.752 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:40.752 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:06:40.752 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:06:40.752 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:40.752 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:06:40.752 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:06:40.752 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:06:40.752 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:06:40.752 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:40.752 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:06:40.752 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:06:40.752 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:40.752 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:40.752 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:06:40.752 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:40.752 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:40.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.753 --rc genhtml_branch_coverage=1 00:06:40.753 --rc genhtml_function_coverage=1 00:06:40.753 --rc genhtml_legend=1 00:06:40.753 --rc geninfo_all_blocks=1 00:06:40.753 --rc geninfo_unexecuted_blocks=1 00:06:40.753 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:40.753 ' 00:06:40.753 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:40.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.753 --rc genhtml_branch_coverage=1 00:06:40.753 --rc genhtml_function_coverage=1 00:06:40.753 --rc genhtml_legend=1 00:06:40.753 --rc geninfo_all_blocks=1 00:06:40.753 --rc geninfo_unexecuted_blocks=1 00:06:40.753 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:40.753 ' 00:06:40.753 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:40.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.753 --rc genhtml_branch_coverage=1 00:06:40.753 --rc genhtml_function_coverage=1 00:06:40.753 --rc genhtml_legend=1 00:06:40.753 --rc geninfo_all_blocks=1 00:06:40.753 --rc geninfo_unexecuted_blocks=1 00:06:40.753 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:40.753 ' 00:06:40.753 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:40.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.753 --rc genhtml_branch_coverage=1 00:06:40.753 --rc genhtml_function_coverage=1 00:06:40.753 --rc genhtml_legend=1 00:06:40.753 --rc geninfo_all_blocks=1 00:06:40.753 --rc geninfo_unexecuted_blocks=1 00:06:40.753 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:40.753 ' 00:06:40.753 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@60 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:06:40.753 15:06:05 llvm_fuzz.nvmf_llvm_fuzz 
-- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:06:40.753 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:06:40.753 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@34 -- # set -e 00:06:40.753 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:06:40.753 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@36 -- # shopt -s extglob 00:06:40.753 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:06:40.753 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:06:40.753 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:06:40.753 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:06:40.753 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:40.753 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:40.753 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:40.753 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:40.753 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:06:40.753 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:40.753 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:40.753 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:40.753 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:40.753 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:40.753 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:40.753 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:40.753 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:40.753 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:40.753 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:40.753 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:40.753 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:06:40.753 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:06:40.753 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:40.753 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:06:40.753 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:06:40.753 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:06:40.753 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@23 -- # CONFIG_CET=n 00:06:40.753 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@24 -- # 
CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:40.753 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:06:40.753 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:06:40.753 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:06:40.753 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:40.753 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:40.753 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:06:40.753 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:06:40.753 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:06:40.753 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:06:40.753 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:06:40.753 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:06:40.753 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:06:40.753 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@37 -- # CONFIG_FUZZER=y 00:06:40.753 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:06:40.753 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:06:40.753 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:06:40.753 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:06:40.753 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:06:40.753 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:06:40.753 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:06:40.753 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:06:40.753 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:06:40.753 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:40.753 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:06:40.753 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:06:40.753 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:06:40.753 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:06:40.753 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:06:40.753 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:06:40.753 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:40.753 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:06:40.753 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:06:40.753 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:06:40.753 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@58 -- # 
CONFIG_ARCH=native 00:06:40.753 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:06:40.753 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:06:40.753 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:06:40.753 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:06:40.753 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:06:40.753 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:06:40.753 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:06:40.753 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:06:40.753 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:06:40.753 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:06:40.753 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:06:40.753 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:06:40.753 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:06:40.753 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@72 -- # CONFIG_SHARED=n 00:06:40.753 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:06:40.753 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:06:40.753 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:40.753 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@76 -- # CONFIG_FC=n 00:06:40.753 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:06:40.753 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:06:40.753 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:06:40.753 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:06:40.753 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:06:40.753 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:06:40.753 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:06:40.754 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:06:40.754 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:06:40.754 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:06:40.754 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:40.754 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:06:40.754 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:06:40.754 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@90 -- # CONFIG_URING=n 00:06:40.754 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:06:40.754 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # dirname 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:06:40.754 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:06:40.754 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:06:40.754 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:06:40.754 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:06:40.754 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:06:40.754 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:06:40.754 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:06:40.754 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:06:40.754 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:06:40.754 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:06:40.754 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:06:40.754 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:06:40.754 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:06:40.754 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:06:40.754 #define SPDK_CONFIG_H 00:06:40.754 #define SPDK_CONFIG_AIO_FSDEV 1 00:06:40.754 #define SPDK_CONFIG_APPS 1 00:06:40.754 #define SPDK_CONFIG_ARCH native 00:06:40.754 #undef SPDK_CONFIG_ASAN 00:06:40.754 #undef SPDK_CONFIG_AVAHI 00:06:40.754 #undef SPDK_CONFIG_CET 00:06:40.754 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:06:40.754 #define SPDK_CONFIG_COVERAGE 1 00:06:40.754 #define SPDK_CONFIG_CROSS_PREFIX 00:06:40.754 #undef SPDK_CONFIG_CRYPTO 00:06:40.754 #undef SPDK_CONFIG_CRYPTO_MLX5 00:06:40.754 #undef SPDK_CONFIG_CUSTOMOCF 00:06:40.754 #undef SPDK_CONFIG_DAOS 00:06:40.754 #define SPDK_CONFIG_DAOS_DIR 00:06:40.754 #define SPDK_CONFIG_DEBUG 1 00:06:40.754 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:06:40.754 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:06:40.754 #define SPDK_CONFIG_DPDK_INC_DIR 00:06:40.754 #define SPDK_CONFIG_DPDK_LIB_DIR 00:06:40.754 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:06:40.754 #undef SPDK_CONFIG_DPDK_UADK 00:06:40.754 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:06:40.754 #define SPDK_CONFIG_EXAMPLES 1 00:06:40.754 #undef SPDK_CONFIG_FC 00:06:40.754 #define SPDK_CONFIG_FC_PATH 00:06:40.754 #define SPDK_CONFIG_FIO_PLUGIN 1 00:06:40.754 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:06:40.754 #define SPDK_CONFIG_FSDEV 1 00:06:40.754 #undef SPDK_CONFIG_FUSE 00:06:40.754 #define SPDK_CONFIG_FUZZER 1 00:06:40.754 #define SPDK_CONFIG_FUZZER_LIB /usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:06:40.754 #undef 
SPDK_CONFIG_GOLANG 00:06:40.754 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:06:40.754 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:06:40.754 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:06:40.754 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:06:40.754 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:06:40.754 #undef SPDK_CONFIG_HAVE_LIBBSD 00:06:40.754 #undef SPDK_CONFIG_HAVE_LZ4 00:06:40.754 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:06:40.754 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:06:40.754 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:06:40.754 #define SPDK_CONFIG_IDXD 1 00:06:40.754 #define SPDK_CONFIG_IDXD_KERNEL 1 00:06:40.754 #undef SPDK_CONFIG_IPSEC_MB 00:06:40.754 #define SPDK_CONFIG_IPSEC_MB_DIR 00:06:40.754 #define SPDK_CONFIG_ISAL 1 00:06:40.754 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:06:40.754 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:06:40.754 #define SPDK_CONFIG_LIBDIR 00:06:40.754 #undef SPDK_CONFIG_LTO 00:06:40.754 #define SPDK_CONFIG_MAX_LCORES 128 00:06:40.754 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:06:40.754 #define SPDK_CONFIG_NVME_CUSE 1 00:06:40.754 #undef SPDK_CONFIG_OCF 00:06:40.754 #define SPDK_CONFIG_OCF_PATH 00:06:40.754 #define SPDK_CONFIG_OPENSSL_PATH 00:06:40.754 #undef SPDK_CONFIG_PGO_CAPTURE 00:06:40.754 #define SPDK_CONFIG_PGO_DIR 00:06:40.754 #undef SPDK_CONFIG_PGO_USE 00:06:40.754 #define SPDK_CONFIG_PREFIX /usr/local 00:06:40.754 #undef SPDK_CONFIG_RAID5F 00:06:40.754 #undef SPDK_CONFIG_RBD 00:06:40.754 #define SPDK_CONFIG_RDMA 1 00:06:40.754 #define SPDK_CONFIG_RDMA_PROV verbs 00:06:40.754 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:06:40.754 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:06:40.754 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:06:40.754 #undef SPDK_CONFIG_SHARED 00:06:40.754 #undef SPDK_CONFIG_SMA 00:06:40.754 #define SPDK_CONFIG_TESTS 1 00:06:40.754 #undef SPDK_CONFIG_TSAN 00:06:40.754 #define SPDK_CONFIG_UBLK 1 00:06:40.754 #define SPDK_CONFIG_UBSAN 1 00:06:40.754 #undef SPDK_CONFIG_UNIT_TESTS 00:06:40.754 #undef SPDK_CONFIG_URING 00:06:40.754 #define SPDK_CONFIG_URING_PATH 00:06:40.754 #undef SPDK_CONFIG_URING_ZNS 00:06:40.754 #undef SPDK_CONFIG_USDT 00:06:40.754 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:06:40.754 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:06:40.754 #define SPDK_CONFIG_VFIO_USER 1 00:06:40.754 #define SPDK_CONFIG_VFIO_USER_DIR 00:06:40.754 #define SPDK_CONFIG_VHOST 1 00:06:40.754 #define SPDK_CONFIG_VIRTIO 1 00:06:40.754 #undef SPDK_CONFIG_VTUNE 00:06:40.754 #define SPDK_CONFIG_VTUNE_DIR 00:06:40.754 #define SPDK_CONFIG_WERROR 1 00:06:40.754 #define SPDK_CONFIG_WPDK_DIR 00:06:40.754 #undef SPDK_CONFIG_XNVME 00:06:40.754 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:06:40.754 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:06:40.754 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:06:40.754 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:06:40.754 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:40.754 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:40.754 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:40.754 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.754 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.754 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.754 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@5 -- # export PATH 00:06:40.754 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.754 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:06:40.754 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:06:40.754 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:06:40.754 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:06:40.754 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:06:40.754 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:06:40.754 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@64 -- # TEST_TAG=N/A 00:06:40.754 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:06:40.754 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@68 -- # uname -s 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- 
pm/common@68 -- # PM_OS=Linux 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@76 -- # SUDO[0]= 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@76 -- # SUDO[1]='sudo -E' 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ Linux == Linux ]] 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power ]] 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@58 -- # : 0 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@62 -- # : 0 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@64 -- # : 0 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@66 -- # : 1 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@68 -- # : 0 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@70 -- # : 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@72 -- # : 0 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@74 -- # : 0 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@76 -- # : 0 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- 
common/autotest_common.sh@78 -- # : 0 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@80 -- # : 0 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@82 -- # : 0 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@84 -- # : 0 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@86 -- # : 0 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@88 -- # : 0 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@90 -- # : 0 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@92 -- # : 0 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@94 -- # : 0 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@96 -- # : 0 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@98 -- # : 1 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@100 -- # : 1 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@102 -- # : rdma 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@104 -- # : 0 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@106 -- # : 0 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@108 -- # : 0 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@110 -- # : 0 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@112 -- # : 0 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:06:40.755 15:06:05 
llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@114 -- # : 0 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@116 -- # : 0 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@118 -- # : 0 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@120 -- # : 0 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@122 -- # : 0 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@124 -- # : 1 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@126 -- # : 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@128 -- # : 0 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@130 -- # : 0 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@132 -- # : 0 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@134 -- # : 0 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@136 -- # : 0 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@138 -- # : 0 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@140 -- # : 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@142 -- # : true 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@144 -- # : 0 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@146 -- # : 0 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@148 -- # : 0 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 
00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@150 -- # : 0 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@152 -- # : 0 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@154 -- # : 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@156 -- # : 0 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@158 -- # : 0 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@160 -- # : 0 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:06:40.755 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@162 -- # : 0 00:06:40.756 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:06:40.756 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@164 -- # : 0 00:06:40.756 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:06:40.756 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@166 -- # : 0 00:06:40.756 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:06:40.756 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@169 -- # : 00:06:40.756 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:06:40.756 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@171 -- # : 0 00:06:40.756 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:06:40.756 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@173 -- # : 0 00:06:40.756 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:06:40.756 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@175 -- # : 1 00:06:40.756 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:06:40.756 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@177 -- # : 0 00:06:40.756 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:06:40.756 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:06:40.756 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:06:40.756 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:06:40.756 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:06:40.756 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@183 -- # export 
VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:40.756 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:40.756 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:40.756 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:40.756 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:06:40.756 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:06:40.756 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:06:40.756 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:06:40.756 15:06:05 
llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:06:40.756 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:06:40.756 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:40.756 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:40.756 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:40.756 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:40.756 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:06:40.756 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:06:40.756 15:06:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@206 -- # cat 00:06:40.756 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:06:40.756 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:40.756 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:40.756 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:40.756 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:40.756 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:06:40.756 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:06:40.756 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:06:40.756 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:06:40.756 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:06:40.756 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:06:40.756 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:40.756 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:40.756 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:40.756 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:40.756 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@262 -- # export 
AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:40.756 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:40.756 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:40.756 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:40.756 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:06:40.756 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:06:40.756 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@269 -- # _LCOV= 00:06:40.756 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:06:40.756 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@270 -- # [[ 1 -eq 1 ]] 00:06:40.756 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@270 -- # _LCOV=1 00:06:40.756 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:06:40.756 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:06:40.756 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@275 -- # lcov_opt='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:06:40.756 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:06:40.756 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@279 -- # export valgrind= 00:06:40.756 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@279 -- # valgrind= 00:06:40.756 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@285 -- # uname -s 00:06:40.756 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:06:40.756 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:06:40.756 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:06:40.756 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:06:40.756 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@289 -- # MAKE=make 00:06:40.756 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j112 00:06:40.756 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:06:40.756 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:06:40.756 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:06:40.756 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@309 -- # TEST_MODE= 00:06:40.756 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@331 -- # [[ -z 2365091 ]] 00:06:40.756 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@331 -- # kill -0 2365091 00:06:40.756 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:06:40.756 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:06:40.756 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:06:40.756 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- 
common/autotest_common.sh@344 -- # local mount target_dir 00:06:40.756 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:06:40.756 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:06:40.757 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:06:40.757 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:06:40.757 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.CMjQ6T 00:06:40.757 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:06:40.757 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:06:40.757 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:06:40.757 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf /tmp/spdk.CMjQ6T/tests/nvmf /tmp/spdk.CMjQ6T 00:06:40.757 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:06:40.757 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:06:40.757 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@340 -- # df -T 00:06:40.757 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:06:40.757 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:06:40.757 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:06:40.757 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:06:40.757 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:06:40.757 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:06:40.757 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:06:40.757 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:06:40.757 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:06:40.757 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:06:40.757 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:06:40.757 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:06:40.757 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:06:40.757 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:06:40.757 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:06:40.757 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=52923850752 00:06:40.757 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=61730607104 00:06:40.757 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@376 -- # 
uses["$mount"]=8806756352 00:06:40.757 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:06:40.757 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:06:40.757 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:06:40.757 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=30860537856 00:06:40.757 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=30865301504 00:06:40.757 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=4763648 00:06:40.757 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:06:40.757 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:06:40.757 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:06:40.757 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=12340129792 00:06:40.757 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=12346122240 00:06:40.757 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=5992448 00:06:40.757 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:06:40.757 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:06:40.757 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:06:40.757 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=30863646720 00:06:40.757 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=30865305600 00:06:40.757 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=1658880 00:06:40.757 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:06:40.757 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:06:40.757 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:06:40.757 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=6173044736 00:06:40.757 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=6173057024 00:06:40.757 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:06:40.757 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:06:40.757 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:06:40.757 * Looking for test storage... 
00:06:40.757 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@381 -- # local target_space new_size 00:06:40.757 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:06:40.757 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:40.757 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:06:40.757 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@385 -- # mount=/ 00:06:40.757 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@387 -- # target_space=52923850752 00:06:40.757 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:06:40.757 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:06:40.757 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:06:40.757 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:06:40.757 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:06:40.757 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@394 -- # new_size=11021348864 00:06:40.757 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:06:40.757 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:40.757 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:40.757 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:40.757 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:40.757 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@402 -- # return 0 00:06:40.757 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1680 -- # set -o errtrace 00:06:40.757 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:06:40.757 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:06:40.757 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:06:40.757 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1685 -- # true 00:06:40.757 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1687 -- # xtrace_fd 00:06:40.757 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:06:40.757 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:06:40.757 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@27 -- # exec 00:06:40.757 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@29 -- # exec 00:06:40.758 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@31 -- # xtrace_restore 00:06:40.758 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@16 -- # unset -v 
'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:06:40.758 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:06:40.758 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@18 -- # set -x 00:06:40.758 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:40.758 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:06:40.758 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:41.017 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:41.017 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:41.017 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:41.017 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:41.017 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:06:41.017 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:06:41.017 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:06:41.017 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:06:41.017 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:06:41.017 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:06:41.017 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:06:41.017 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:41.017 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:06:41.017 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:06:41.017 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:41.017 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:41.017 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:06:41.017 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:06:41.017 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:41.017 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:06:41.017 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:06:41.017 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:06:41.017 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:06:41.017 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:41.017 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:06:41.017 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:06:41.017 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:41.017 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:41.017 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:06:41.017 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:41.017 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:41.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.017 --rc genhtml_branch_coverage=1 00:06:41.017 --rc genhtml_function_coverage=1 00:06:41.017 --rc genhtml_legend=1 00:06:41.017 --rc geninfo_all_blocks=1 00:06:41.017 --rc geninfo_unexecuted_blocks=1 00:06:41.017 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:41.017 ' 00:06:41.017 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:41.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.017 --rc genhtml_branch_coverage=1 00:06:41.017 --rc genhtml_function_coverage=1 00:06:41.017 --rc genhtml_legend=1 00:06:41.017 --rc geninfo_all_blocks=1 00:06:41.017 --rc geninfo_unexecuted_blocks=1 00:06:41.017 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:41.017 ' 00:06:41.017 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:41.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.017 --rc genhtml_branch_coverage=1 00:06:41.017 --rc genhtml_function_coverage=1 00:06:41.017 --rc genhtml_legend=1 00:06:41.017 --rc geninfo_all_blocks=1 00:06:41.017 --rc geninfo_unexecuted_blocks=1 00:06:41.017 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:41.017 ' 00:06:41.017 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:41.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.017 --rc genhtml_branch_coverage=1 00:06:41.017 --rc genhtml_function_coverage=1 00:06:41.017 --rc genhtml_legend=1 00:06:41.017 --rc geninfo_all_blocks=1 00:06:41.017 --rc geninfo_unexecuted_blocks=1 00:06:41.017 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:41.017 ' 00:06:41.017 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@61 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/../common.sh 00:06:41.017 15:06:06 
llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@8 -- # pids=() 00:06:41.017 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@63 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:06:41.017 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@64 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:06:41.017 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@64 -- # fuzz_num=25 00:06:41.017 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@65 -- # (( fuzz_num != 0 )) 00:06:41.017 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@67 -- # trap 'cleanup /tmp/llvm_fuzz* /var/tmp/suppress_nvmf_fuzz; exit 1' SIGINT SIGTERM EXIT 00:06:41.017 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@69 -- # mem_size=512 00:06:41.017 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@70 -- # [[ 1 -eq 1 ]] 00:06:41.017 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@71 -- # start_llvm_fuzz_short 25 1 00:06:41.017 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@69 -- # local fuzz_num=25 00:06:41.017 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@70 -- # local time=1 00:06:41.017 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i = 0 )) 00:06:41.017 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:41.017 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:06:41.017 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=0 00:06:41.017 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:41.017 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:41.017 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:06:41.017 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_0.conf 00:06:41.017 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:41.017 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:41.017 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 0 00:06:41.017 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4400 00:06:41.017 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:06:41.017 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' 00:06:41.018 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4400"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:41.018 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:41.018 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:41.018 15:06:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' -c /tmp/fuzz_json_0.conf -t 1 -D 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 -Z 0 00:06:41.018 [2024-11-27 15:06:06.215362] Starting SPDK v25.01-pre git sha1 2e10c84c8 / DPDK 24.03.0 initialization... 00:06:41.018 [2024-11-27 15:06:06.215431] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2365220 ] 00:06:41.276 [2024-11-27 15:06:06.472761] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.276 [2024-11-27 15:06:06.527875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.276 [2024-11-27 15:06:06.587476] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:41.276 [2024-11-27 15:06:06.603879] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4400 *** 00:06:41.534 INFO: Running with entropic power schedule (0xFF, 100). 00:06:41.534 INFO: Seed: 3906469663 00:06:41.534 INFO: Loaded 1 modules (389699 inline 8-bit counters): 389699 [0x2c6ea8c, 0x2ccdccf), 00:06:41.534 INFO: Loaded 1 PC tables (389699 PCs): 389699 [0x2ccdcd0,0x32c0100), 00:06:41.534 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:06:41.535 INFO: A corpus is not provided, starting from an empty corpus 00:06:41.535 #2 INITED exec/s: 0 rss: 65Mb 00:06:41.535 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:41.535 This may also happen if the target rejected all inputs we tried so far 00:06:41.535 [2024-11-27 15:06:06.652184] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:41.535 [2024-11-27 15:06:06.652214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.793 NEW_FUNC[1/715]: 0x43bbc8 in fuzz_admin_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:47 00:06:41.793 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:41.794 #18 NEW cov: 12214 ft: 12218 corp: 2/75b lim: 320 exec/s: 0 rss: 73Mb L: 74/74 MS: 1 InsertRepeatedBytes- 00:06:41.794 [2024-11-27 15:06:06.983179] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (91) qid:0 cid:4 nsid:91919191 cdw10:91919191 cdw11:91919191 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.794 [2024-11-27 15:06:06.983221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.794 NEW_FUNC[1/2]: 0x19761f8 in nvme_get_sgl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_qpair.c:159 00:06:41.794 NEW_FUNC[2/2]: 0x1976d68 in nvme_get_sgl_unkeyed /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_qpair.c:143 00:06:41.794 #20 NEW cov: 12368 ft: 13179 corp: 3/156b lim: 320 exec/s: 0 rss: 73Mb L: 81/81 MS: 2 ChangeBit-InsertRepeatedBytes- 00:06:41.794 [2024-11-27 15:06:07.023103] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:41.794 [2024-11-27 15:06:07.023128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:06:41.794 #21 NEW cov: 12374 ft: 13483 corp: 4/231b lim: 320 exec/s: 0 rss: 73Mb L: 75/81 MS: 1 InsertByte- 00:06:41.794 [2024-11-27 15:06:07.083300] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (91) qid:0 cid:4 nsid:91919191 cdw10:91919191 cdw11:91919191 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.794 [2024-11-27 15:06:07.083327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.794 #22 NEW cov: 12459 ft: 13794 corp: 5/312b lim: 320 exec/s: 0 rss: 74Mb L: 81/81 MS: 1 ChangeBinInt- 00:06:42.053 [2024-11-27 15:06:07.143452] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:42.053 [2024-11-27 15:06:07.143477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.053 #23 NEW cov: 12459 ft: 13934 corp: 6/386b lim: 320 exec/s: 0 rss: 74Mb L: 74/81 MS: 1 CopyPart- 00:06:42.053 [2024-11-27 15:06:07.183540] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:42.053 [2024-11-27 15:06:07.183565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.053 #24 NEW cov: 12459 ft: 14022 corp: 7/460b lim: 320 exec/s: 0 rss: 74Mb L: 74/81 MS: 1 ChangeBit- 00:06:42.053 [2024-11-27 15:06:07.223687] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (91) qid:0 cid:4 nsid:91919191 cdw10:91919191 cdw11:91919191 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.053 [2024-11-27 15:06:07.223712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.053 #25 NEW cov: 12459 ft: 14077 corp: 8/542b lim: 320 exec/s: 0 rss: 74Mb L: 82/82 MS: 1 InsertByte- 00:06:42.053 [2024-11-27 15:06:07.263845] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (91) qid:0 cid:4 nsid:91919191 cdw10:91919191 cdw11:91919191 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.053 [2024-11-27 15:06:07.263870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.053 #26 NEW cov: 12459 ft: 14119 corp: 9/631b lim: 320 exec/s: 0 rss: 74Mb L: 89/89 MS: 1 CMP- DE: "\021\000\000\000\000\000\000\000"- 00:06:42.053 [2024-11-27 15:06:07.323964] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:42.053 [2024-11-27 15:06:07.323988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.053 #30 NEW cov: 12459 ft: 14149 corp: 10/739b lim: 320 exec/s: 0 rss: 74Mb L: 108/108 MS: 4 EraseBytes-ChangeByte-ChangeBit-CrossOver- 00:06:42.053 [2024-11-27 15:06:07.384193] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (91) qid:0 cid:4 nsid:91919191 cdw10:91919191 cdw11:91919191 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.053 [2024-11-27 15:06:07.384218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.311 #31 NEW cov: 12459 ft: 14174 corp: 11/852b lim: 320 exec/s: 0 rss: 74Mb L: 113/113 MS: 1 CrossOver- 00:06:42.311 [2024-11-27 15:06:07.424252] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (91) qid:0 cid:4 nsid:91919191 cdw10:91919191 cdw11:91919191 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.311 [2024-11-27 15:06:07.424278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.311 #32 NEW cov: 12459 ft: 14180 corp: 12/934b lim: 320 exec/s: 0 rss: 74Mb L: 82/113 MS: 1 ChangeBinInt- 00:06:42.311 [2024-11-27 15:06:07.484475] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (91) qid:0 cid:4 nsid:91919191 cdw10:91919191 cdw11:91919191 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.311 [2024-11-27 15:06:07.484500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.311 #33 NEW cov: 12459 ft: 14204 corp: 13/1016b lim: 320 exec/s: 0 rss: 74Mb L: 82/113 MS: 1 ChangeBinInt- 00:06:42.311 [2024-11-27 15:06:07.544624] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:42.311 [2024-11-27 15:06:07.544650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.311 NEW_FUNC[1/1]: 0x1c47d08 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:06:42.311 #34 NEW cov: 12482 ft: 14241 corp: 14/1124b lim: 320 exec/s: 0 rss: 74Mb L: 108/113 MS: 1 PersAutoDict- DE: "\021\000\000\000\000\000\000\000"- 00:06:42.311 [2024-11-27 15:06:07.604767] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:42.311 [2024-11-27 15:06:07.604794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.311 #36 NEW cov: 12482 ft: 14329 corp: 15/1242b lim: 320 exec/s: 36 rss: 74Mb L: 118/118 MS: 2 EraseBytes-CrossOver- 00:06:42.570 [2024-11-27 15:06:07.664984] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (91) qid:0 cid:4 nsid:91919191 cdw10:91919191 cdw11:91919191 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.570 [2024-11-27 15:06:07.665010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.570 #37 NEW cov: 12482 ft: 14375 corp: 16/1323b lim: 320 exec/s: 37 rss: 74Mb L: 81/118 MS: 1 ChangeBit- 00:06:42.570 [2024-11-27 15:06:07.705035] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (91) qid:0 cid:4 nsid:91919191 cdw10:91919191 cdw11:91919191 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.570 [2024-11-27 15:06:07.705060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.570 #38 NEW cov: 12482 ft: 14416 corp: 17/1406b lim: 320 exec/s: 38 rss: 74Mb L: 83/118 MS: 1 InsertByte- 00:06:42.570 [2024-11-27 15:06:07.745031] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (91) qid:0 cid:4 nsid:91919191 cdw10:91919191 cdw11:91919191 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.570 [2024-11-27 15:06:07.745056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.570 #39 NEW cov: 12482 ft: 14466 corp: 18/1495b lim: 320 exec/s: 39 rss: 74Mb L: 89/118 MS: 1 ChangeByte- 00:06:42.570 [2024-11-27 15:06:07.805315] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:901 cdw10:00000000 cdw11:00000000 00:06:42.570 [2024-11-27 15:06:07.805341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.570 #40 NEW cov: 12489 ft: 14511 corp: 19/1603b lim: 320 exec/s: 40 rss: 74Mb L: 108/118 MS: 1 ChangeBinInt- 00:06:42.570 [2024-11-27 15:06:07.865559] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (91) qid:0 cid:4 nsid:91919191 cdw10:91919191 cdw11:91919191 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.570 [2024-11-27 15:06:07.865584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.570 #41 NEW cov: 12489 ft: 14537 corp: 20/1685b lim: 320 exec/s: 41 rss: 75Mb L: 82/118 MS: 1 InsertByte- 00:06:42.570 [2024-11-27 15:06:07.905670] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (91) qid:0 cid:4 nsid:91919191 cdw10:91919191 cdw11:91919191 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.570 [2024-11-27 15:06:07.905697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.830 #42 NEW cov: 12489 ft: 14593 corp: 21/1767b lim: 320 exec/s: 42 rss: 75Mb L: 82/118 MS: 1 ChangeByte- 00:06:42.830 [2024-11-27 15:06:07.945763] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (91) qid:0 cid:4 nsid:91919191 cdw10:91919191 cdw11:91919191 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.830 [2024-11-27 15:06:07.945788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.830 #48 NEW cov: 12489 ft: 14657 corp: 22/1850b lim: 320 exec/s: 48 rss: 75Mb L: 83/118 MS: 1 ChangeBit- 00:06:42.830 [2024-11-27 15:06:08.005976] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (91) qid:0 cid:4 nsid:91919191 cdw10:91919191 cdw11:91919191 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.830 [2024-11-27 15:06:08.006002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.830 #49 NEW cov: 12489 ft: 14659 corp: 23/1939b lim: 320 exec/s: 49 rss: 75Mb L: 89/118 MS: 1 PersAutoDict- DE: "\021\000\000\000\000\000\000\000"- 00:06:42.830 [2024-11-27 15:06:08.046078] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (91) qid:0 cid:4 nsid:91919191 cdw10:91919191 cdw11:91919191 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.830 [2024-11-27 15:06:08.046104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.830 #50 NEW cov: 12489 ft: 14664 corp: 24/2022b lim: 320 exec/s: 50 rss: 75Mb L: 83/118 MS: 1 ChangeBit- 00:06:42.830 [2024-11-27 15:06:08.106222] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (91) qid:0 cid:4 nsid:91919191 cdw10:91919191 cdw11:91919191 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.830 [2024-11-27 15:06:08.106247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.830 #51 NEW cov: 12489 ft: 14684 corp: 25/2104b lim: 320 exec/s: 51 rss: 75Mb L: 82/118 MS: 1 ChangeBit- 00:06:42.830 [2024-11-27 15:06:08.146335] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (91) qid:0 cid:4 nsid:91919191 cdw10:91919191 
cdw11:91919191 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.830 [2024-11-27 15:06:08.146359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.089 #52 NEW cov: 12489 ft: 14696 corp: 26/2186b lim: 320 exec/s: 52 rss: 75Mb L: 82/118 MS: 1 ChangeBinInt- 00:06:43.089 [2024-11-27 15:06:08.206486] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (91) qid:0 cid:4 nsid:91919191 cdw10:91919191 cdw11:91919191 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.089 [2024-11-27 15:06:08.206511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.089 #53 NEW cov: 12489 ft: 14743 corp: 27/2267b lim: 320 exec/s: 53 rss: 75Mb L: 81/118 MS: 1 ChangeByte- 00:06:43.089 [2024-11-27 15:06:08.246653] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (91) qid:0 cid:4 nsid:91919191 cdw10:91919191 cdw11:91919191 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.089 [2024-11-27 15:06:08.246679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.089 #54 NEW cov: 12489 ft: 14765 corp: 28/2349b lim: 320 exec/s: 54 rss: 75Mb L: 82/118 MS: 1 ChangeByte- 00:06:43.089 [2024-11-27 15:06:08.306740] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:43.089 [2024-11-27 15:06:08.306764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.089 #55 NEW cov: 12489 ft: 14777 corp: 29/2467b lim: 320 exec/s: 55 rss: 75Mb L: 118/118 MS: 1 ShuffleBytes- 00:06:43.089 [2024-11-27 15:06:08.366963] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (91) qid:0 cid:4 nsid:91919191 cdw10:91919191 cdw11:91919191 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.089 [2024-11-27 15:06:08.366988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.089 #56 NEW cov: 12489 ft: 14785 corp: 30/2549b lim: 320 exec/s: 56 rss: 75Mb L: 82/118 MS: 1 ShuffleBytes- 00:06:43.089 [2024-11-27 15:06:08.427119] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:43.089 [2024-11-27 15:06:08.427144] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.348 #57 NEW cov: 12489 ft: 14790 corp: 31/2624b lim: 320 exec/s: 57 rss: 75Mb L: 75/118 MS: 1 ChangeBinInt- 00:06:43.348 [2024-11-27 15:06:08.467255] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (91) qid:0 cid:4 nsid:91919191 cdw10:91919191 cdw11:91919191 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.348 [2024-11-27 15:06:08.467281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.348 #58 NEW cov: 12489 ft: 14796 corp: 32/2706b lim: 320 exec/s: 58 rss: 75Mb L: 82/118 MS: 1 ChangeBinInt- 00:06:43.348 [2024-11-27 15:06:08.527449] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (91) qid:0 cid:4 nsid:91919191 cdw10:91919191 cdw11:91919191 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.348 [2024-11-27 15:06:08.527474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 
cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.348 #59 NEW cov: 12489 ft: 14806 corp: 33/2788b lim: 320 exec/s: 59 rss: 75Mb L: 82/118 MS: 1 ChangeBinInt- 00:06:43.348 [2024-11-27 15:06:08.567507] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (91) qid:0 cid:4 nsid:91919191 cdw10:91919191 cdw11:91919191 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.348 [2024-11-27 15:06:08.567532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.348 #60 NEW cov: 12489 ft: 14834 corp: 34/2871b lim: 320 exec/s: 60 rss: 75Mb L: 83/118 MS: 1 CopyPart- 00:06:43.348 [2024-11-27 15:06:08.607604] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (91) qid:0 cid:4 nsid:91919191 cdw10:91919191 cdw11:91919191 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.348 [2024-11-27 15:06:08.607629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.348 #61 NEW cov: 12489 ft: 14840 corp: 35/2952b lim: 320 exec/s: 61 rss: 75Mb L: 81/118 MS: 1 ShuffleBytes- 00:06:43.348 [2024-11-27 15:06:08.647773] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (91) qid:0 cid:4 nsid:91919191 cdw10:91919191 cdw11:91919191 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.348 [2024-11-27 15:06:08.647797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.607 #62 NEW cov: 12489 ft: 14863 corp: 36/3035b lim: 320 exec/s: 31 rss: 75Mb L: 83/118 MS: 1 PersAutoDict- DE: "\021\000\000\000\000\000\000\000"- 00:06:43.607 #62 DONE cov: 12489 ft: 14863 corp: 36/3035b lim: 320 exec/s: 31 rss: 75Mb 00:06:43.607 ###### Recommended dictionary. ###### 00:06:43.607 "\021\000\000\000\000\000\000\000" # Uses: 4 00:06:43.607 ###### End of recommended dictionary. 
###### 00:06:43.607 Done 62 runs in 2 second(s) 00:06:43.607 15:06:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_0.conf /var/tmp/suppress_nvmf_fuzz 00:06:43.607 15:06:08 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:43.607 15:06:08 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:43.607 15:06:08 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:06:43.607 15:06:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=1 00:06:43.607 15:06:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:43.607 15:06:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:43.607 15:06:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:06:43.607 15:06:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_1.conf 00:06:43.607 15:06:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:43.607 15:06:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:43.607 15:06:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 1 00:06:43.607 15:06:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4401 00:06:43.607 15:06:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:06:43.607 15:06:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' 00:06:43.607 15:06:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4401"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:43.608 15:06:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:43.608 15:06:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:43.608 15:06:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' -c /tmp/fuzz_json_1.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 -Z 1 00:06:43.608 [2024-11-27 15:06:08.830536] Starting SPDK v25.01-pre git sha1 2e10c84c8 / DPDK 24.03.0 initialization... 00:06:43.608 [2024-11-27 15:06:08.830596] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2365698 ] 00:06:43.866 [2024-11-27 15:06:09.011911] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.866 [2024-11-27 15:06:09.043040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.866 [2024-11-27 15:06:09.102446] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:43.866 [2024-11-27 15:06:09.118822] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4401 *** 00:06:43.866 INFO: Running with entropic power schedule (0xFF, 100). 
00:06:43.866 INFO: Seed: 2128498203 00:06:43.866 INFO: Loaded 1 modules (389699 inline 8-bit counters): 389699 [0x2c6ea8c, 0x2ccdccf), 00:06:43.866 INFO: Loaded 1 PC tables (389699 PCs): 389699 [0x2ccdcd0,0x32c0100), 00:06:43.866 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:06:43.866 INFO: A corpus is not provided, starting from an empty corpus 00:06:43.866 #2 INITED exec/s: 0 rss: 66Mb 00:06:43.866 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:43.866 This may also happen if the target rejected all inputs we tried so far 00:06:43.866 [2024-11-27 15:06:09.163937] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (272388) > buf size (4096) 00:06:43.866 [2024-11-27 15:06:09.164162] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a008100 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.866 [2024-11-27 15:06:09.164204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.382 NEW_FUNC[1/716]: 0x43c4c8 in fuzz_admin_get_log_page_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:67 00:06:44.382 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:44.382 #5 NEW cov: 12326 ft: 12313 corp: 2/7b lim: 30 exec/s: 0 rss: 73Mb L: 6/6 MS: 3 CopyPart-CMP-InsertByte- DE: "\000\000\000\000"- 00:06:44.382 [2024-11-27 15:06:09.504775] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (272388) > buf size (4096) 00:06:44.382 [2024-11-27 15:06:09.505008] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a008100 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.382 [2024-11-27 15:06:09.505039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.382 NEW_FUNC[1/1]: 0x19b6e08 in nvme_get_transport /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_transport.c:56 00:06:44.382 #6 NEW cov: 12448 ft: 12742 corp: 3/13b lim: 30 exec/s: 0 rss: 73Mb L: 6/6 MS: 1 CopyPart- 00:06:44.382 [2024-11-27 15:06:09.564880] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x5b 00:06:44.382 [2024-11-27 15:06:09.565096] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:000000c5 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.382 [2024-11-27 15:06:09.565126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.382 #8 NEW cov: 12460 ft: 12952 corp: 4/19b lim: 30 exec/s: 0 rss: 73Mb L: 6/6 MS: 2 EraseBytes-InsertByte- 00:06:44.382 [2024-11-27 15:06:09.625010] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (272388) > buf size (4096) 00:06:44.382 [2024-11-27 15:06:09.625223] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a008100 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.382 [2024-11-27 15:06:09.625250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.382 #9 NEW cov: 12545 ft: 13355 corp: 5/29b lim: 30 exec/s: 0 rss: 73Mb L: 
10/10 MS: 1 CMP- DE: "\003\000\000\000"- 00:06:44.382 [2024-11-27 15:06:09.665110] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xc55b 00:06:44.382 [2024-11-27 15:06:09.665324] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.382 [2024-11-27 15:06:09.665349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.382 #10 NEW cov: 12545 ft: 13653 corp: 6/35b lim: 30 exec/s: 0 rss: 73Mb L: 6/10 MS: 1 ShuffleBytes- 00:06:44.641 [2024-11-27 15:06:09.725319] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (272388) > buf size (4096) 00:06:44.641 [2024-11-27 15:06:09.725538] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a008100 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.641 [2024-11-27 15:06:09.725564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.641 #11 NEW cov: 12545 ft: 13740 corp: 7/42b lim: 30 exec/s: 0 rss: 73Mb L: 7/10 MS: 1 InsertByte- 00:06:44.641 [2024-11-27 15:06:09.765419] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xc55b 00:06:44.641 [2024-11-27 15:06:09.765666] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.641 [2024-11-27 15:06:09.765690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.641 #12 NEW cov: 12545 ft: 13802 corp: 8/48b lim: 30 exec/s: 0 rss: 73Mb L: 6/10 MS: 1 ChangeBit- 00:06:44.641 [2024-11-27 15:06:09.825566] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000c55b 00:06:44.641 [2024-11-27 15:06:09.825809] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:000083f4 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.641 [2024-11-27 15:06:09.825834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.641 #13 NEW cov: 12545 ft: 13845 corp: 9/54b lim: 30 exec/s: 0 rss: 74Mb L: 6/10 MS: 1 ChangeBinInt- 00:06:44.641 [2024-11-27 15:06:09.885745] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x5b 00:06:44.641 [2024-11-27 15:06:09.885958] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:c5000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.641 [2024-11-27 15:06:09.885982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.641 #14 NEW cov: 12545 ft: 13882 corp: 10/60b lim: 30 exec/s: 0 rss: 74Mb L: 6/10 MS: 1 ShuffleBytes- 00:06:44.641 [2024-11-27 15:06:09.925879] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (272388) > buf size (4096) 00:06:44.641 [2024-11-27 15:06:09.926113] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a008104 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.641 [2024-11-27 15:06:09.926138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 
m:0 dnr:0 00:06:44.641 #15 NEW cov: 12545 ft: 13918 corp: 11/66b lim: 30 exec/s: 0 rss: 74Mb L: 6/10 MS: 1 ChangeBit- 00:06:44.641 [2024-11-27 15:06:09.965984] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:06:44.641 [2024-11-27 15:06:09.966211] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a0000c5 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.641 [2024-11-27 15:06:09.966236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.899 #21 NEW cov: 12545 ft: 13932 corp: 12/72b lim: 30 exec/s: 0 rss: 74Mb L: 6/10 MS: 1 ShuffleBytes- 00:06:44.899 [2024-11-27 15:06:10.006249] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300001f1f 00:06:44.899 [2024-11-27 15:06:10.006643] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a1f831f cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.899 [2024-11-27 15:06:10.006676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.899 #26 NEW cov: 12545 ft: 14026 corp: 13/79b lim: 30 exec/s: 0 rss: 74Mb L: 7/10 MS: 5 ChangeBit-CrossOver-CopyPart-ChangeBit-InsertRepeatedBytes- 00:06:44.899 [2024-11-27 15:06:10.046218] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300001f91 00:06:44.899 [2024-11-27 15:06:10.046441] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a1f831f cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.899 [2024-11-27 15:06:10.046466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.899 NEW_FUNC[1/1]: 0x1c47d08 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:06:44.899 #27 NEW cov: 12568 ft: 14081 corp: 14/86b lim: 30 exec/s: 0 rss: 74Mb L: 7/10 MS: 1 ChangeByte- 00:06:44.899 [2024-11-27 15:06:10.106445] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300001fd6 00:06:44.899 [2024-11-27 15:06:10.106682] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a1f831f cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.899 [2024-11-27 15:06:10.106707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.899 #28 NEW cov: 12568 ft: 14093 corp: 15/94b lim: 30 exec/s: 0 rss: 74Mb L: 8/10 MS: 1 InsertByte- 00:06:44.899 [2024-11-27 15:06:10.146508] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:06:44.899 [2024-11-27 15:06:10.146744] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a0000ef cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.899 [2024-11-27 15:06:10.146770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.899 #29 NEW cov: 12568 ft: 14120 corp: 16/102b lim: 30 exec/s: 29 rss: 74Mb L: 8/10 MS: 1 InsertByte- 00:06:44.899 [2024-11-27 15:06:10.206728] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (360452) > buf size (4096) 00:06:44.899 [2024-11-27 15:06:10.206944] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:60008100 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.899 [2024-11-27 15:06:10.206970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.899 #30 NEW cov: 12568 ft: 14230 corp: 17/109b lim: 30 exec/s: 30 rss: 74Mb L: 7/10 MS: 1 InsertByte- 00:06:45.157 [2024-11-27 15:06:10.246793] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000033f 00:06:45.157 [2024-11-27 15:06:10.247009] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a008100 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.157 [2024-11-27 15:06:10.247035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.157 #31 NEW cov: 12568 ft: 14248 corp: 18/120b lim: 30 exec/s: 31 rss: 74Mb L: 11/11 MS: 1 InsertByte- 00:06:45.157 [2024-11-27 15:06:10.307035] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x5b 00:06:45.157 [2024-11-27 15:06:10.307255] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:2a0000c5 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.157 [2024-11-27 15:06:10.307280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.157 #32 NEW cov: 12568 ft: 14254 corp: 19/126b lim: 30 exec/s: 32 rss: 74Mb L: 6/11 MS: 1 ChangeByte- 00:06:45.157 [2024-11-27 15:06:10.347088] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (796676) > buf size (4096) 00:06:45.157 [2024-11-27 15:06:10.347309] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a008300 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.157 [2024-11-27 15:06:10.347336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.157 #33 NEW cov: 12568 ft: 14273 corp: 20/133b lim: 30 exec/s: 33 rss: 74Mb L: 7/11 MS: 1 ChangeBit- 00:06:45.157 [2024-11-27 15:06:10.387217] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300001fd6 00:06:45.157 [2024-11-27 15:06:10.387439] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a1f831f cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.157 [2024-11-27 15:06:10.387465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.157 #34 NEW cov: 12568 ft: 14374 corp: 21/142b lim: 30 exec/s: 34 rss: 74Mb L: 9/11 MS: 1 InsertByte- 00:06:45.157 [2024-11-27 15:06:10.447352] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100007103 00:06:45.157 [2024-11-27 15:06:10.447572] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a008100 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.157 [2024-11-27 15:06:10.447602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.157 #35 NEW cov: 12568 ft: 14391 corp: 22/153b lim: 30 exec/s: 35 rss: 74Mb L: 11/11 MS: 1 InsertByte- 00:06:45.157 [2024-11-27 15:06:10.487422] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: 
Invalid log page offset 0x5b 00:06:45.158 [2024-11-27 15:06:10.487634] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:c5000004 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.158 [2024-11-27 15:06:10.487659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.416 #36 NEW cov: 12568 ft: 14417 corp: 23/159b lim: 30 exec/s: 36 rss: 74Mb L: 6/11 MS: 1 ChangeBinInt- 00:06:45.416 [2024-11-27 15:06:10.547648] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xc55b 00:06:45.416 [2024-11-27 15:06:10.547879] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:03000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.416 [2024-11-27 15:06:10.547903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.416 #37 NEW cov: 12568 ft: 14426 corp: 24/165b lim: 30 exec/s: 37 rss: 74Mb L: 6/11 MS: 1 PersAutoDict- DE: "\003\000\000\000"- 00:06:45.416 [2024-11-27 15:06:10.587794] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (360452) > buf size (4096) 00:06:45.416 [2024-11-27 15:06:10.587920] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (93188) > buf size (4096) 00:06:45.416 [2024-11-27 15:06:10.588254] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:60008100 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.416 [2024-11-27 15:06:10.588279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.416 [2024-11-27 15:06:10.588336] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:5b000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.416 [2024-11-27 15:06:10.588353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.416 [2024-11-27 15:06:10.588407] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.416 [2024-11-27 15:06:10.588421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.416 #38 NEW cov: 12585 ft: 14906 corp: 25/185b lim: 30 exec/s: 38 rss: 74Mb L: 20/20 MS: 1 InsertRepeatedBytes- 00:06:45.416 [2024-11-27 15:06:10.647918] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (787224) > buf size (4096) 00:06:45.416 [2024-11-27 15:06:10.648138] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00c58300 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.416 [2024-11-27 15:06:10.648163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.416 #39 NEW cov: 12585 ft: 14927 corp: 26/191b lim: 30 exec/s: 39 rss: 74Mb L: 6/20 MS: 1 ShuffleBytes- 00:06:45.416 [2024-11-27 15:06:10.688041] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (272388) > buf size (4096) 00:06:45.416 [2024-11-27 15:06:10.688258] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a008100 
cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.416 [2024-11-27 15:06:10.688283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.416 #40 NEW cov: 12585 ft: 14957 corp: 27/197b lim: 30 exec/s: 40 rss: 74Mb L: 6/20 MS: 1 ChangeBit- 00:06:45.416 [2024-11-27 15:06:10.728188] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200008e03 00:06:45.416 [2024-11-27 15:06:10.728411] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0af902ff cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.416 [2024-11-27 15:06:10.728436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.675 #41 NEW cov: 12585 ft: 14974 corp: 28/208b lim: 30 exec/s: 41 rss: 74Mb L: 11/20 MS: 1 ChangeBinInt- 00:06:45.675 [2024-11-27 15:06:10.788330] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x5b 00:06:45.675 [2024-11-27 15:06:10.788549] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:c5000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.675 [2024-11-27 15:06:10.788574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.675 #42 NEW cov: 12585 ft: 14978 corp: 29/214b lim: 30 exec/s: 42 rss: 74Mb L: 6/20 MS: 1 ShuffleBytes- 00:06:45.675 [2024-11-27 15:06:10.828462] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300001f1f 00:06:45.675 [2024-11-27 15:06:10.828693] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aaa831f cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.675 [2024-11-27 15:06:10.828718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.675 #43 NEW cov: 12585 ft: 15006 corp: 30/223b lim: 30 exec/s: 43 rss: 74Mb L: 9/20 MS: 1 InsertByte- 00:06:45.675 [2024-11-27 15:06:10.868590] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000c55b 00:06:45.675 [2024-11-27 15:06:10.868817] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:000083f6 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.675 [2024-11-27 15:06:10.868842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.675 #44 NEW cov: 12585 ft: 15064 corp: 31/229b lim: 30 exec/s: 44 rss: 75Mb L: 6/20 MS: 1 ChangeBit- 00:06:45.675 [2024-11-27 15:06:10.928945] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:000000c5 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.675 [2024-11-27 15:06:10.928971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.675 #45 NEW cov: 12585 ft: 15136 corp: 32/235b lim: 30 exec/s: 45 rss: 75Mb L: 6/20 MS: 1 CopyPart- 00:06:45.675 [2024-11-27 15:06:10.968850] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x5d 00:06:45.675 [2024-11-27 15:06:10.969074] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:c5000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:06:45.675 [2024-11-27 15:06:10.969099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.675 #46 NEW cov: 12585 ft: 15147 corp: 33/242b lim: 30 exec/s: 46 rss: 75Mb L: 7/20 MS: 1 InsertByte- 00:06:45.934 [2024-11-27 15:06:11.029021] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200008e03 00:06:45.934 [2024-11-27 15:06:11.029242] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0af902ff cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.934 [2024-11-27 15:06:11.029267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.934 #47 NEW cov: 12585 ft: 15154 corp: 34/253b lim: 30 exec/s: 47 rss: 75Mb L: 11/20 MS: 1 ShuffleBytes- 00:06:45.934 [2024-11-27 15:06:11.089120] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x5d 00:06:45.934 [2024-11-27 15:06:11.089332] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:c5000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.934 [2024-11-27 15:06:11.089358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.934 #48 NEW cov: 12585 ft: 15185 corp: 35/260b lim: 30 exec/s: 48 rss: 75Mb L: 7/20 MS: 1 ChangeByte- 00:06:45.934 [2024-11-27 15:06:11.149298] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (796676) > buf size (4096) 00:06:45.934 [2024-11-27 15:06:11.149515] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a00831f cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.934 [2024-11-27 15:06:11.149540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.934 #49 NEW cov: 12585 ft: 15212 corp: 36/267b lim: 30 exec/s: 24 rss: 75Mb L: 7/20 MS: 1 CrossOver- 00:06:45.934 #49 DONE cov: 12585 ft: 15212 corp: 36/267b lim: 30 exec/s: 24 rss: 75Mb 00:06:45.934 ###### Recommended dictionary. ###### 00:06:45.934 "\000\000\000\000" # Uses: 0 00:06:45.934 "\003\000\000\000" # Uses: 1 00:06:45.934 ###### End of recommended dictionary. 
###### 00:06:45.934 Done 49 runs in 2 second(s) 00:06:46.192 15:06:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_1.conf /var/tmp/suppress_nvmf_fuzz 00:06:46.192 15:06:11 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:46.192 15:06:11 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:46.192 15:06:11 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:06:46.192 15:06:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=2 00:06:46.192 15:06:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:46.192 15:06:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:46.192 15:06:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:06:46.192 15:06:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_2.conf 00:06:46.192 15:06:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:46.192 15:06:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:46.192 15:06:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 2 00:06:46.192 15:06:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4402 00:06:46.192 15:06:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:06:46.192 15:06:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' 00:06:46.192 15:06:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4402"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:46.192 15:06:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:46.192 15:06:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:46.192 15:06:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' -c /tmp/fuzz_json_2.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 -Z 2 00:06:46.192 [2024-11-27 15:06:11.324573] Starting SPDK v25.01-pre git sha1 2e10c84c8 / DPDK 24.03.0 initialization... 00:06:46.192 [2024-11-27 15:06:11.324647] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2366232 ] 00:06:46.451 [2024-11-27 15:06:11.582662] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.451 [2024-11-27 15:06:11.642338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.451 [2024-11-27 15:06:11.701747] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:46.451 [2024-11-27 15:06:11.718105] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4402 *** 00:06:46.451 INFO: Running with entropic power schedule (0xFF, 100). 
00:06:46.451 INFO: Seed: 433539038 00:06:46.451 INFO: Loaded 1 modules (389699 inline 8-bit counters): 389699 [0x2c6ea8c, 0x2ccdccf), 00:06:46.451 INFO: Loaded 1 PC tables (389699 PCs): 389699 [0x2ccdcd0,0x32c0100), 00:06:46.451 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:06:46.451 INFO: A corpus is not provided, starting from an empty corpus 00:06:46.451 #2 INITED exec/s: 0 rss: 65Mb 00:06:46.451 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:46.451 This may also happen if the target rejected all inputs we tried so far 00:06:46.451 [2024-11-27 15:06:11.783432] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.451 [2024-11-27 15:06:11.783462] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.969 NEW_FUNC[1/716]: 0x43ef78 in fuzz_admin_identify_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:95 00:06:46.969 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:46.969 #3 NEW cov: 12256 ft: 12245 corp: 2/14b lim: 35 exec/s: 0 rss: 73Mb L: 13/13 MS: 1 InsertRepeatedBytes- 00:06:46.969 [2024-11-27 15:06:12.124313] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:3e00000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.969 [2024-11-27 15:06:12.124347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.969 #4 NEW cov: 12386 ft: 12847 corp: 3/23b lim: 35 exec/s: 0 rss: 73Mb L: 9/13 MS: 1 CMP- DE: ">\000\000\000\000\000\000\000"- 00:06:46.969 [2024-11-27 15:06:12.164338] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.969 [2024-11-27 15:06:12.164364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.969 #5 NEW cov: 12392 ft: 13151 corp: 4/36b lim: 35 exec/s: 0 rss: 73Mb L: 13/13 MS: 1 ChangeBit- 00:06:46.969 [2024-11-27 15:06:12.224886] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff002c cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.969 [2024-11-27 15:06:12.224912] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.969 [2024-11-27 15:06:12.224984] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.969 [2024-11-27 15:06:12.224998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.969 [2024-11-27 15:06:12.225051] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.969 [2024-11-27 15:06:12.225065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.970 [2024-11-27 
15:06:12.225118] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.970 [2024-11-27 15:06:12.225132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:46.970 #7 NEW cov: 12477 ft: 14035 corp: 5/66b lim: 35 exec/s: 0 rss: 73Mb L: 30/30 MS: 2 ChangeByte-InsertRepeatedBytes- 00:06:46.970 [2024-11-27 15:06:12.264608] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.970 [2024-11-27 15:06:12.264634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.970 #8 NEW cov: 12477 ft: 14197 corp: 6/79b lim: 35 exec/s: 0 rss: 73Mb L: 13/30 MS: 1 ShuffleBytes- 00:06:47.228 [2024-11-27 15:06:12.325024] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff002c cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.228 [2024-11-27 15:06:12.325050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.228 [2024-11-27 15:06:12.325105] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.228 [2024-11-27 15:06:12.325119] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.228 [2024-11-27 15:06:12.325172] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.228 [2024-11-27 15:06:12.325185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.228 #9 NEW cov: 12477 ft: 14471 corp: 7/100b lim: 35 exec/s: 0 rss: 73Mb L: 21/30 MS: 1 EraseBytes- 00:06:47.228 [2024-11-27 15:06:12.385154] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff002c cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.228 [2024-11-27 15:06:12.385179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.228 [2024-11-27 15:06:12.385233] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.228 [2024-11-27 15:06:12.385248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.228 [2024-11-27 15:06:12.385302] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.228 [2024-11-27 15:06:12.385315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.228 #10 NEW cov: 12477 ft: 14547 corp: 8/121b lim: 35 exec/s: 0 rss: 73Mb L: 21/30 MS: 1 ShuffleBytes- 00:06:47.228 [2024-11-27 15:06:12.445045] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff000a cdw11:ff00ffff SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:06:47.228 [2024-11-27 15:06:12.445070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.228 #11 NEW cov: 12477 ft: 14641 corp: 9/134b lim: 35 exec/s: 0 rss: 73Mb L: 13/30 MS: 1 ChangeBit- 00:06:47.228 [2024-11-27 15:06:12.485200] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.228 [2024-11-27 15:06:12.485225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.228 #12 NEW cov: 12477 ft: 14657 corp: 10/144b lim: 35 exec/s: 0 rss: 73Mb L: 10/30 MS: 1 EraseBytes- 00:06:47.228 [2024-11-27 15:06:12.525294] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.228 [2024-11-27 15:06:12.525320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.228 #13 NEW cov: 12477 ft: 14715 corp: 11/157b lim: 35 exec/s: 0 rss: 73Mb L: 13/30 MS: 1 ChangeBit- 00:06:47.228 [2024-11-27 15:06:12.565529] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:47.228 [2024-11-27 15:06:12.565761] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff002c cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.228 [2024-11-27 15:06:12.565788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.228 [2024-11-27 15:06:12.565843] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ff3e00ff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.228 [2024-11-27 15:06:12.565859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.229 [2024-11-27 15:06:12.565914] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.229 [2024-11-27 15:06:12.565930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.488 #14 NEW cov: 12488 ft: 14792 corp: 12/178b lim: 35 exec/s: 0 rss: 73Mb L: 21/30 MS: 1 PersAutoDict- DE: ">\000\000\000\000\000\000\000"- 00:06:47.488 [2024-11-27 15:06:12.626054] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff000a cdw11:8d008d8d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.488 [2024-11-27 15:06:12.626082] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.488 [2024-11-27 15:06:12.626139] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:8d8d008d cdw11:8d008d8d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.488 [2024-11-27 15:06:12.626153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.488 [2024-11-27 15:06:12.626210] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:8d8d008d cdw11:ff008dff SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:06:47.488 [2024-11-27 15:06:12.626224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.488 [2024-11-27 15:06:12.626279] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff007f cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.488 [2024-11-27 15:06:12.626293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:47.488 NEW_FUNC[1/1]: 0x1c47d08 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:06:47.488 #15 NEW cov: 12511 ft: 14863 corp: 13/206b lim: 35 exec/s: 0 rss: 74Mb L: 28/30 MS: 1 InsertRepeatedBytes- 00:06:47.488 [2024-11-27 15:06:12.685786] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:f7ff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.488 [2024-11-27 15:06:12.685813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.488 #16 NEW cov: 12511 ft: 14912 corp: 14/219b lim: 35 exec/s: 0 rss: 74Mb L: 13/30 MS: 1 ChangeBit- 00:06:47.488 [2024-11-27 15:06:12.725884] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff000a cdw11:00003e00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.488 [2024-11-27 15:06:12.725911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.488 #17 NEW cov: 12511 ft: 14944 corp: 15/232b lim: 35 exec/s: 0 rss: 74Mb L: 13/30 MS: 1 PersAutoDict- DE: ">\000\000\000\000\000\000\000"- 00:06:47.488 [2024-11-27 15:06:12.766232] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff002c cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.488 [2024-11-27 15:06:12.766258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.488 [2024-11-27 15:06:12.766345] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.488 [2024-11-27 15:06:12.766359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.489 [2024-11-27 15:06:12.766413] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.489 [2024-11-27 15:06:12.766426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.489 #18 NEW cov: 12511 ft: 14968 corp: 16/253b lim: 35 exec/s: 18 rss: 74Mb L: 21/30 MS: 1 ChangeByte- 00:06:47.489 [2024-11-27 15:06:12.806379] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff002c cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.489 [2024-11-27 15:06:12.806405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.489 [2024-11-27 15:06:12.806461] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.489 [2024-11-27 15:06:12.806476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.489 [2024-11-27 15:06:12.806530] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.489 [2024-11-27 15:06:12.806544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.747 #24 NEW cov: 12511 ft: 15034 corp: 17/274b lim: 35 exec/s: 24 rss: 74Mb L: 21/30 MS: 1 ChangeBinInt- 00:06:47.747 [2024-11-27 15:06:12.846513] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.747 [2024-11-27 15:06:12.846539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.747 [2024-11-27 15:06:12.846595] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:fbff00ef cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.747 [2024-11-27 15:06:12.846615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.747 [2024-11-27 15:06:12.846672] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:fbff00ef cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.747 [2024-11-27 15:06:12.846685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.747 #25 NEW cov: 12511 ft: 15073 corp: 18/296b lim: 35 exec/s: 25 rss: 74Mb L: 22/30 MS: 1 CopyPart- 00:06:47.747 [2024-11-27 15:06:12.906500] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.747 [2024-11-27 15:06:12.906525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.747 [2024-11-27 15:06:12.906581] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:fbff00ef cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.747 [2024-11-27 15:06:12.906596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.747 #26 NEW cov: 12511 ft: 15264 corp: 19/316b lim: 35 exec/s: 26 rss: 74Mb L: 20/30 MS: 1 CopyPart- 00:06:47.747 [2024-11-27 15:06:12.946509] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.747 [2024-11-27 15:06:12.946535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.747 #27 NEW cov: 12511 ft: 15288 corp: 20/326b lim: 35 exec/s: 27 rss: 74Mb L: 10/30 MS: 1 CrossOver- 00:06:47.747 [2024-11-27 15:06:13.006683] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.747 [2024-11-27 15:06:13.006709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 
sqhd:000f p:0 m:0 dnr:0 00:06:47.747 #28 NEW cov: 12511 ft: 15372 corp: 21/339b lim: 35 exec/s: 28 rss: 74Mb L: 13/30 MS: 1 CopyPart- 00:06:47.747 [2024-11-27 15:06:13.066937] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.747 [2024-11-27 15:06:13.066962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.747 [2024-11-27 15:06:13.067034] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.747 [2024-11-27 15:06:13.067048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.007 #29 NEW cov: 12511 ft: 15429 corp: 22/356b lim: 35 exec/s: 29 rss: 74Mb L: 17/30 MS: 1 CrossOver- 00:06:48.007 [2024-11-27 15:06:13.107054] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffef000a cdw11:ff00fbff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.007 [2024-11-27 15:06:13.107079] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.007 [2024-11-27 15:06:13.107133] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffef00ff cdw11:ff00fbff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.007 [2024-11-27 15:06:13.107148] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.007 #30 NEW cov: 12511 ft: 15441 corp: 23/373b lim: 35 exec/s: 30 rss: 74Mb L: 17/30 MS: 1 EraseBytes- 00:06:48.007 [2024-11-27 15:06:13.167245] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.007 [2024-11-27 15:06:13.167270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.007 [2024-11-27 15:06:13.167339] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.007 [2024-11-27 15:06:13.167359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.007 #31 NEW cov: 12511 ft: 15449 corp: 24/390b lim: 35 exec/s: 31 rss: 74Mb L: 17/30 MS: 1 ChangeByte- 00:06:48.007 [2024-11-27 15:06:13.227269] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.007 [2024-11-27 15:06:13.227293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.007 #32 NEW cov: 12511 ft: 15517 corp: 25/403b lim: 35 exec/s: 32 rss: 74Mb L: 13/30 MS: 1 ChangeBinInt- 00:06:48.007 [2024-11-27 15:06:13.267388] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:48.007 [2024-11-27 15:06:13.267730] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ff3e002c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.007 [2024-11-27 15:06:13.267755] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.007 [2024-11-27 15:06:13.267810] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.007 [2024-11-27 15:06:13.267827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.007 [2024-11-27 15:06:13.267879] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.007 [2024-11-27 15:06:13.267893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:48.007 #33 NEW cov: 12511 ft: 15540 corp: 26/424b lim: 35 exec/s: 33 rss: 74Mb L: 21/30 MS: 1 PersAutoDict- DE: ">\000\000\000\000\000\000\000"- 00:06:48.007 [2024-11-27 15:06:13.307485] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:48.007 [2024-11-27 15:06:13.307806] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ff3e002c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.007 [2024-11-27 15:06:13.307832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.007 [2024-11-27 15:06:13.307887] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.007 [2024-11-27 15:06:13.307903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.007 [2024-11-27 15:06:13.307956] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.007 [2024-11-27 15:06:13.307970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:48.265 #34 NEW cov: 12511 ft: 15556 corp: 27/445b lim: 35 exec/s: 34 rss: 74Mb L: 21/30 MS: 1 ShuffleBytes- 00:06:48.265 [2024-11-27 15:06:13.367632] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:48.265 [2024-11-27 15:06:13.367765] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:48.265 [2024-11-27 15:06:13.367987] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ff3e002c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.265 [2024-11-27 15:06:13.368012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.265 [2024-11-27 15:06:13.368064] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.265 [2024-11-27 15:06:13.368083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.265 [2024-11-27 15:06:13.368136] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:06:48.265 [2024-11-27 15:06:13.368152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:48.265 #35 NEW cov: 12511 ft: 15628 corp: 28/466b lim: 35 exec/s: 35 rss: 74Mb L: 21/30 MS: 1 PersAutoDict- DE: ">\000\000\000\000\000\000\000"- 00:06:48.265 [2024-11-27 15:06:13.407935] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ff0d000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.265 [2024-11-27 15:06:13.407959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.265 [2024-11-27 15:06:13.408028] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.265 [2024-11-27 15:06:13.408043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.265 #36 NEW cov: 12511 ft: 15633 corp: 29/480b lim: 35 exec/s: 36 rss: 74Mb L: 14/30 MS: 1 InsertByte- 00:06:48.265 [2024-11-27 15:06:13.448198] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff002c cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.265 [2024-11-27 15:06:13.448223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.265 [2024-11-27 15:06:13.448305] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:0aff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.265 [2024-11-27 15:06:13.448319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.265 [2024-11-27 15:06:13.448370] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.265 [2024-11-27 15:06:13.448383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:48.265 #37 NEW cov: 12511 ft: 15637 corp: 30/503b lim: 35 exec/s: 37 rss: 74Mb L: 23/30 MS: 1 CrossOver- 00:06:48.265 [2024-11-27 15:06:13.488524] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff002c cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.265 [2024-11-27 15:06:13.488549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.265 [2024-11-27 15:06:13.488626] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.265 [2024-11-27 15:06:13.488641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.265 [2024-11-27 15:06:13.488696] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:cdcd00cd cdw11:cd00cdcd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.265 [2024-11-27 15:06:13.488709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:48.265 [2024-11-27 15:06:13.488765] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:cdcd00cd cdw11:cd00cdcd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.265 [2024-11-27 15:06:13.488779] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:48.265 [2024-11-27 15:06:13.488831] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.265 [2024-11-27 15:06:13.488848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:48.265 #38 NEW cov: 12511 ft: 15681 corp: 31/538b lim: 35 exec/s: 38 rss: 74Mb L: 35/35 MS: 1 InsertRepeatedBytes- 00:06:48.265 [2024-11-27 15:06:13.528265] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.265 [2024-11-27 15:06:13.528290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.265 [2024-11-27 15:06:13.528361] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.265 [2024-11-27 15:06:13.528375] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.265 #39 NEW cov: 12511 ft: 15712 corp: 32/555b lim: 35 exec/s: 39 rss: 74Mb L: 17/35 MS: 1 ChangeBit- 00:06:48.265 [2024-11-27 15:06:13.568632] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.265 [2024-11-27 15:06:13.568656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.265 [2024-11-27 15:06:13.568712] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.265 [2024-11-27 15:06:13.568726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.265 [2024-11-27 15:06:13.568778] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:0aff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.265 [2024-11-27 15:06:13.568792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:48.265 [2024-11-27 15:06:13.568844] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:effb00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.265 [2024-11-27 15:06:13.568857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:48.265 #40 NEW cov: 12511 ft: 15727 corp: 33/584b lim: 35 exec/s: 40 rss: 74Mb L: 29/35 MS: 1 CrossOver- 00:06:48.523 [2024-11-27 15:06:13.608796] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.523 [2024-11-27 15:06:13.608821] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD 
(00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.523 [2024-11-27 15:06:13.608890] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.523 [2024-11-27 15:06:13.608905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.523 [2024-11-27 15:06:13.608958] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:0aff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.523 [2024-11-27 15:06:13.608971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:48.523 [2024-11-27 15:06:13.609025] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:fb00ffef SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.523 [2024-11-27 15:06:13.609038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:48.523 #41 NEW cov: 12511 ft: 15751 corp: 34/616b lim: 35 exec/s: 41 rss: 75Mb L: 32/35 MS: 1 CrossOver- 00:06:48.523 [2024-11-27 15:06:13.668699] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffef000a cdw11:ff00fbfe SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.523 [2024-11-27 15:06:13.668727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.523 [2024-11-27 15:06:13.668781] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffef00ff cdw11:ff00fbff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.523 [2024-11-27 15:06:13.668795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.523 #42 NEW cov: 12511 ft: 15756 corp: 35/633b lim: 35 exec/s: 42 rss: 75Mb L: 17/35 MS: 1 ChangeBit- 00:06:48.523 [2024-11-27 15:06:13.728922] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff002c cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.523 [2024-11-27 15:06:13.728947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.523 [2024-11-27 15:06:13.729001] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:0aff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.523 [2024-11-27 15:06:13.729017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.524 [2024-11-27 15:06:13.729067] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.524 [2024-11-27 15:06:13.729080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:48.524 #43 NEW cov: 12511 ft: 15763 corp: 36/656b lim: 35 exec/s: 21 rss: 75Mb L: 23/35 MS: 1 ChangeBinInt- 00:06:48.524 #43 DONE cov: 12511 ft: 15763 corp: 36/656b lim: 35 exec/s: 21 rss: 75Mb 00:06:48.524 ###### Recommended dictionary. 
###### 00:06:48.524 ">\000\000\000\000\000\000\000" # Uses: 4 00:06:48.524 ###### End of recommended dictionary. ###### 00:06:48.524 Done 43 runs in 2 second(s) 00:06:48.782 15:06:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_2.conf /var/tmp/suppress_nvmf_fuzz 00:06:48.782 15:06:13 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:48.782 15:06:13 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:48.783 15:06:13 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:06:48.783 15:06:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=3 00:06:48.783 15:06:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:48.783 15:06:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:48.783 15:06:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:06:48.783 15:06:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_3.conf 00:06:48.783 15:06:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:48.783 15:06:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:48.783 15:06:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 3 00:06:48.783 15:06:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4403 00:06:48.783 15:06:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:06:48.783 15:06:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' 00:06:48.783 15:06:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4403"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:48.783 15:06:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:48.783 15:06:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:48.783 15:06:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' -c /tmp/fuzz_json_3.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 -Z 3 00:06:48.783 [2024-11-27 15:06:13.923255] Starting SPDK v25.01-pre git sha1 2e10c84c8 / DPDK 24.03.0 initialization... 
00:06:48.783 [2024-11-27 15:06:13.923319] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2366563 ] 00:06:48.783 [2024-11-27 15:06:14.111881] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.041 [2024-11-27 15:06:14.148581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.041 [2024-11-27 15:06:14.208102] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:49.041 [2024-11-27 15:06:14.224461] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4403 *** 00:06:49.041 INFO: Running with entropic power schedule (0xFF, 100). 00:06:49.041 INFO: Seed: 2939541718 00:06:49.041 INFO: Loaded 1 modules (389699 inline 8-bit counters): 389699 [0x2c6ea8c, 0x2ccdccf), 00:06:49.041 INFO: Loaded 1 PC tables (389699 PCs): 389699 [0x2ccdcd0,0x32c0100), 00:06:49.041 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:06:49.041 INFO: A corpus is not provided, starting from an empty corpus 00:06:49.041 #2 INITED exec/s: 0 rss: 65Mb 00:06:49.041 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:49.041 This may also happen if the target rejected all inputs we tried so far 00:06:49.300 NEW_FUNC[1/704]: 0x440c58 in fuzz_admin_abort_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:114 00:06:49.300 NEW_FUNC[2/704]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:49.300 #7 NEW cov: 12189 ft: 12191 corp: 2/21b lim: 20 exec/s: 0 rss: 73Mb L: 20/20 MS: 5 ChangeByte-ChangeByte-ChangeByte-InsertByte-InsertRepeatedBytes- 00:06:49.557 NEW_FUNC[1/1]: 0x1f993c8 in _get_thread /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:345 00:06:49.557 #8 NEW cov: 12307 ft: 12889 corp: 3/41b lim: 20 exec/s: 0 rss: 73Mb L: 20/20 MS: 1 ShuffleBytes- 00:06:49.557 #9 NEW cov: 12313 ft: 13154 corp: 4/61b lim: 20 exec/s: 0 rss: 73Mb L: 20/20 MS: 1 ChangeBit- 00:06:49.557 #10 NEW cov: 12398 ft: 13448 corp: 5/81b lim: 20 exec/s: 0 rss: 74Mb L: 20/20 MS: 1 ChangeByte- 00:06:49.557 #11 NEW cov: 12398 ft: 13599 corp: 6/101b lim: 20 exec/s: 0 rss: 74Mb L: 20/20 MS: 1 CrossOver- 00:06:49.557 #17 NEW cov: 12403 ft: 14036 corp: 7/110b lim: 20 exec/s: 0 rss: 74Mb L: 9/20 MS: 1 CMP- DE: "\251\364B\316\222u\222\000"- 00:06:49.816 #18 NEW cov: 12404 ft: 14104 corp: 8/126b lim: 20 exec/s: 0 rss: 74Mb L: 16/20 MS: 1 CrossOver- 00:06:49.816 #19 NEW cov: 12404 ft: 14148 corp: 9/142b lim: 20 exec/s: 0 rss: 74Mb L: 16/20 MS: 1 ShuffleBytes- 00:06:49.816 #20 NEW cov: 12404 ft: 14179 corp: 10/151b lim: 20 exec/s: 0 rss: 74Mb L: 9/20 MS: 1 CrossOver- 00:06:50.074 NEW_FUNC[1/1]: 0x1c47d08 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:06:50.074 #21 NEW cov: 12427 ft: 14275 corp: 11/171b lim: 20 exec/s: 0 rss: 74Mb L: 20/20 MS: 1 InsertRepeatedBytes- 00:06:50.074 #22 NEW cov: 12427 ft: 14320 corp: 12/189b lim: 20 exec/s: 0 rss: 74Mb L: 18/20 MS: 1 CrossOver- 00:06:50.074 #24 NEW cov: 12427 ft: 14596 corp: 13/193b lim: 20 exec/s: 24 rss: 74Mb L: 4/20 MS: 2 InsertByte-CopyPart- 00:06:50.074 #25 NEW cov: 12427 ft: 14624 corp: 14/213b lim: 20 exec/s: 25 rss: 74Mb L: 20/20 MS: 
1 ChangeBit- 00:06:50.074 #26 NEW cov: 12427 ft: 14685 corp: 15/233b lim: 20 exec/s: 26 rss: 74Mb L: 20/20 MS: 1 ChangeByte- 00:06:50.333 #27 NEW cov: 12427 ft: 14759 corp: 16/253b lim: 20 exec/s: 27 rss: 74Mb L: 20/20 MS: 1 ShuffleBytes- 00:06:50.333 #28 NEW cov: 12427 ft: 14800 corp: 17/273b lim: 20 exec/s: 28 rss: 74Mb L: 20/20 MS: 1 ChangeBinInt- 00:06:50.333 #29 NEW cov: 12427 ft: 14814 corp: 18/291b lim: 20 exec/s: 29 rss: 74Mb L: 18/20 MS: 1 CopyPart- 00:06:50.333 #30 NEW cov: 12427 ft: 14888 corp: 19/295b lim: 20 exec/s: 30 rss: 74Mb L: 4/20 MS: 1 CrossOver- 00:06:50.591 #31 NEW cov: 12427 ft: 14899 corp: 20/315b lim: 20 exec/s: 31 rss: 75Mb L: 20/20 MS: 1 CMP- DE: "\001\000\000\037"- 00:06:50.591 #32 NEW cov: 12427 ft: 14912 corp: 21/335b lim: 20 exec/s: 32 rss: 75Mb L: 20/20 MS: 1 CrossOver- 00:06:50.591 #33 NEW cov: 12427 ft: 14951 corp: 22/339b lim: 20 exec/s: 33 rss: 75Mb L: 4/20 MS: 1 ChangeBinInt- 00:06:50.591 #34 NEW cov: 12427 ft: 14957 corp: 23/348b lim: 20 exec/s: 34 rss: 75Mb L: 9/20 MS: 1 CopyPart- 00:06:50.849 #35 NEW cov: 12427 ft: 15040 corp: 24/359b lim: 20 exec/s: 35 rss: 75Mb L: 11/20 MS: 1 EraseBytes- 00:06:50.849 #36 NEW cov: 12427 ft: 15056 corp: 25/379b lim: 20 exec/s: 36 rss: 75Mb L: 20/20 MS: 1 ChangeByte- 00:06:50.849 #37 NEW cov: 12427 ft: 15063 corp: 26/399b lim: 20 exec/s: 37 rss: 75Mb L: 20/20 MS: 1 ChangeBit- 00:06:50.849 #38 NEW cov: 12427 ft: 15077 corp: 27/419b lim: 20 exec/s: 38 rss: 75Mb L: 20/20 MS: 1 ShuffleBytes- 00:06:51.108 #39 NEW cov: 12427 ft: 15087 corp: 28/429b lim: 20 exec/s: 39 rss: 75Mb L: 10/20 MS: 1 InsertByte- 00:06:51.108 #40 NEW cov: 12427 ft: 15094 corp: 29/440b lim: 20 exec/s: 40 rss: 75Mb L: 11/20 MS: 1 InsertByte- 00:06:51.108 #46 NEW cov: 12427 ft: 15122 corp: 30/456b lim: 20 exec/s: 23 rss: 75Mb L: 16/20 MS: 1 ShuffleBytes- 00:06:51.108 #46 DONE cov: 12427 ft: 15122 corp: 30/456b lim: 20 exec/s: 23 rss: 75Mb 00:06:51.108 ###### Recommended dictionary. ###### 00:06:51.108 "\251\364B\316\222u\222\000" # Uses: 0 00:06:51.108 "\001\000\000\037" # Uses: 0 00:06:51.108 ###### End of recommended dictionary. 
###### 00:06:51.108 Done 46 runs in 2 second(s) 00:06:51.108 15:06:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_3.conf /var/tmp/suppress_nvmf_fuzz 00:06:51.108 15:06:16 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:51.108 15:06:16 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:51.108 15:06:16 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:06:51.108 15:06:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=4 00:06:51.108 15:06:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:51.108 15:06:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:51.109 15:06:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:06:51.109 15:06:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_4.conf 00:06:51.109 15:06:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:51.109 15:06:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:51.109 15:06:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 4 00:06:51.109 15:06:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4404 00:06:51.109 15:06:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:06:51.109 15:06:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' 00:06:51.109 15:06:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4404"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:51.109 15:06:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:51.109 15:06:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:51.109 15:06:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' -c /tmp/fuzz_json_4.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 -Z 4 00:06:51.368 [2024-11-27 15:06:16.455109] Starting SPDK v25.01-pre git sha1 2e10c84c8 / DPDK 24.03.0 initialization... 00:06:51.368 [2024-11-27 15:06:16.455176] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2367057 ] 00:06:51.368 [2024-11-27 15:06:16.640981] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.368 [2024-11-27 15:06:16.673669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.627 [2024-11-27 15:06:16.733081] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:51.627 [2024-11-27 15:06:16.749419] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4404 *** 00:06:51.627 INFO: Running with entropic power schedule (0xFF, 100). 
00:06:51.627 INFO: Seed: 1167556479 00:06:51.627 INFO: Loaded 1 modules (389699 inline 8-bit counters): 389699 [0x2c6ea8c, 0x2ccdccf), 00:06:51.627 INFO: Loaded 1 PC tables (389699 PCs): 389699 [0x2ccdcd0,0x32c0100), 00:06:51.627 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:06:51.627 INFO: A corpus is not provided, starting from an empty corpus 00:06:51.627 #2 INITED exec/s: 0 rss: 66Mb 00:06:51.627 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:51.627 This may also happen if the target rejected all inputs we tried so far 00:06:51.627 [2024-11-27 15:06:16.798411] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:52525252 cdw11:52520002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.627 [2024-11-27 15:06:16.798440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.627 [2024-11-27 15:06:16.798497] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:52525252 cdw11:52520002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.627 [2024-11-27 15:06:16.798511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.886 NEW_FUNC[1/717]: 0x441d58 in fuzz_admin_create_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:126 00:06:51.886 NEW_FUNC[2/717]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:51.886 #13 NEW cov: 12296 ft: 12293 corp: 2/20b lim: 35 exec/s: 0 rss: 73Mb L: 19/19 MS: 1 InsertRepeatedBytes- 00:06:51.886 [2024-11-27 15:06:17.119224] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:52525252 cdw11:52520002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.886 [2024-11-27 15:06:17.119257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.886 [2024-11-27 15:06:17.119312] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:52525252 cdw11:52520002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.886 [2024-11-27 15:06:17.119326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.886 #14 NEW cov: 12409 ft: 12850 corp: 3/39b lim: 35 exec/s: 0 rss: 73Mb L: 19/19 MS: 1 ShuffleBytes- 00:06:51.886 [2024-11-27 15:06:17.179103] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:b5b552b5 cdw11:b5b50001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.886 [2024-11-27 15:06:17.179132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.886 #16 NEW cov: 12415 ft: 13989 corp: 4/50b lim: 35 exec/s: 0 rss: 73Mb L: 11/19 MS: 2 CrossOver-InsertRepeatedBytes- 00:06:51.886 [2024-11-27 15:06:17.219337] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:52525252 cdw11:52520002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.886 [2024-11-27 15:06:17.219363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:06:51.886 [2024-11-27 15:06:17.219418] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:52525252 cdw11:52520002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.886 [2024-11-27 15:06:17.219434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.145 #17 NEW cov: 12500 ft: 14223 corp: 5/69b lim: 35 exec/s: 0 rss: 73Mb L: 19/19 MS: 1 ShuffleBytes- 00:06:52.145 [2024-11-27 15:06:17.259484] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:52525252 cdw11:52520002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.145 [2024-11-27 15:06:17.259510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.145 [2024-11-27 15:06:17.259565] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:52525252 cdw11:52520002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.145 [2024-11-27 15:06:17.259580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.145 #18 NEW cov: 12500 ft: 14309 corp: 6/88b lim: 35 exec/s: 0 rss: 73Mb L: 19/19 MS: 1 ShuffleBytes- 00:06:52.145 [2024-11-27 15:06:17.319468] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:b5b552b5 cdw11:b5b50001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.145 [2024-11-27 15:06:17.319494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.145 #19 NEW cov: 12500 ft: 14361 corp: 7/99b lim: 35 exec/s: 0 rss: 74Mb L: 11/19 MS: 1 ChangeBinInt- 00:06:52.145 [2024-11-27 15:06:17.379637] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:b5b552b5 cdw11:b5b50001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.145 [2024-11-27 15:06:17.379662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.145 #20 NEW cov: 12500 ft: 14447 corp: 8/112b lim: 35 exec/s: 0 rss: 74Mb L: 13/19 MS: 1 CopyPart- 00:06:52.145 [2024-11-27 15:06:17.440097] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:b5b552b5 cdw11:b5b50001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.145 [2024-11-27 15:06:17.440122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.145 [2024-11-27 15:06:17.440176] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:52b5b5b5 cdw11:b5b50001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.145 [2024-11-27 15:06:17.440190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.145 [2024-11-27 15:06:17.440241] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:4bb5b5b5 cdw11:b5b50001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.145 [2024-11-27 15:06:17.440254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:52.145 #21 NEW cov: 12500 ft: 14718 corp: 9/134b lim: 35 exec/s: 0 rss: 74Mb L: 22/22 MS: 1 CrossOver- 00:06:52.145 [2024-11-27 15:06:17.480089] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:52525252 cdw11:52500002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.145 [2024-11-27 15:06:17.480114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.145 [2024-11-27 15:06:17.480168] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:52525252 cdw11:52520002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.145 [2024-11-27 15:06:17.480182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.404 #22 NEW cov: 12500 ft: 14822 corp: 10/153b lim: 35 exec/s: 0 rss: 74Mb L: 19/22 MS: 1 ChangeBit- 00:06:52.404 [2024-11-27 15:06:17.540230] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:52525252 cdw11:52520002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.404 [2024-11-27 15:06:17.540254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.404 [2024-11-27 15:06:17.540311] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:52525252 cdw11:52520002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.404 [2024-11-27 15:06:17.540325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.404 #23 NEW cov: 12500 ft: 14896 corp: 11/172b lim: 35 exec/s: 0 rss: 74Mb L: 19/22 MS: 1 ChangeBinInt- 00:06:52.404 [2024-11-27 15:06:17.580344] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:52525252 cdw11:52500002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.404 [2024-11-27 15:06:17.580369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.404 [2024-11-27 15:06:17.580421] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00001300 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.404 [2024-11-27 15:06:17.580436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.404 #24 NEW cov: 12500 ft: 14925 corp: 12/191b lim: 35 exec/s: 0 rss: 74Mb L: 19/22 MS: 1 ChangeBinInt- 00:06:52.404 [2024-11-27 15:06:17.640344] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:b5b552b5 cdw11:b5b50002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.404 [2024-11-27 15:06:17.640369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.404 NEW_FUNC[1/1]: 0x1c47d08 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:06:52.404 #25 NEW cov: 12523 ft: 14982 corp: 13/204b lim: 35 exec/s: 0 rss: 74Mb L: 13/22 MS: 1 ShuffleBytes- 00:06:52.404 [2024-11-27 15:06:17.700550] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:b5b552b5 cdw11:b5b50001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.404 [2024-11-27 15:06:17.700574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.404 #26 NEW cov: 12523 ft: 15004 corp: 14/217b 
lim: 35 exec/s: 0 rss: 74Mb L: 13/22 MS: 1 ChangeBit- 00:06:52.404 [2024-11-27 15:06:17.740817] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:52135252 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.404 [2024-11-27 15:06:17.740842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.405 [2024-11-27 15:06:17.740894] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:52525252 cdw11:52520002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.405 [2024-11-27 15:06:17.740908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.663 #27 NEW cov: 12523 ft: 15032 corp: 15/236b lim: 35 exec/s: 27 rss: 74Mb L: 19/22 MS: 1 ChangeBinInt- 00:06:52.663 [2024-11-27 15:06:17.800830] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:b5b5b5b5 cdw11:4b350001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.663 [2024-11-27 15:06:17.800854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.663 #28 NEW cov: 12523 ft: 15057 corp: 16/249b lim: 35 exec/s: 28 rss: 74Mb L: 13/22 MS: 1 CopyPart- 00:06:52.663 [2024-11-27 15:06:17.861253] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:35b552b5 cdw11:b5b50001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.663 [2024-11-27 15:06:17.861277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.663 [2024-11-27 15:06:17.861330] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:52b5b5b5 cdw11:b5b50001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.663 [2024-11-27 15:06:17.861344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.663 [2024-11-27 15:06:17.861397] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:4bb5b5b5 cdw11:b5b50001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.663 [2024-11-27 15:06:17.861411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:52.663 #29 NEW cov: 12523 ft: 15076 corp: 17/271b lim: 35 exec/s: 29 rss: 74Mb L: 22/22 MS: 1 ChangeBit- 00:06:52.663 [2024-11-27 15:06:17.921317] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:52525252 cdw11:52500002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.663 [2024-11-27 15:06:17.921342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.663 [2024-11-27 15:06:17.921395] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:52525252 cdw11:52520002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.663 [2024-11-27 15:06:17.921409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.663 #30 NEW cov: 12523 ft: 15089 corp: 18/290b lim: 35 exec/s: 30 rss: 74Mb L: 19/22 MS: 1 CopyPart- 00:06:52.664 [2024-11-27 15:06:17.961223] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:b5b552b5 cdw11:b5bd0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.664 [2024-11-27 15:06:17.961248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.664 #31 NEW cov: 12523 ft: 15116 corp: 19/303b lim: 35 exec/s: 31 rss: 74Mb L: 13/22 MS: 1 ChangeBinInt- 00:06:52.664 [2024-11-27 15:06:18.001395] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:b5b552b5 cdw11:b5b50001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.664 [2024-11-27 15:06:18.001421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.922 #32 NEW cov: 12523 ft: 15163 corp: 20/316b lim: 35 exec/s: 32 rss: 74Mb L: 13/22 MS: 1 ShuffleBytes- 00:06:52.922 [2024-11-27 15:06:18.041620] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:b5b552b5 cdw11:b5b50000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.922 [2024-11-27 15:06:18.041645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.922 [2024-11-27 15:06:18.041703] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00b50000 cdw11:b5b50002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.922 [2024-11-27 15:06:18.041717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.922 #33 NEW cov: 12523 ft: 15236 corp: 21/333b lim: 35 exec/s: 33 rss: 75Mb L: 17/22 MS: 1 CMP- DE: "\002\000\000\000"- 00:06:52.922 [2024-11-27 15:06:18.101829] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:52525252 cdw11:52520002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.922 [2024-11-27 15:06:18.101854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.922 [2024-11-27 15:06:18.101909] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:52415252 cdw11:52520002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.923 [2024-11-27 15:06:18.101922] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.923 #34 NEW cov: 12523 ft: 15243 corp: 22/353b lim: 35 exec/s: 34 rss: 75Mb L: 20/22 MS: 1 InsertByte- 00:06:52.923 [2024-11-27 15:06:18.141950] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:b5b552e8 cdw11:b5b50001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.923 [2024-11-27 15:06:18.141975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.923 [2024-11-27 15:06:18.142035] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:b54bb5b5 cdw11:35b50001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.923 [2024-11-27 15:06:18.142049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.923 #35 NEW cov: 12523 ft: 15258 corp: 23/367b lim: 35 exec/s: 35 rss: 75Mb L: 14/22 MS: 1 InsertByte- 00:06:52.923 [2024-11-27 15:06:18.202259] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:35b552b5 cdw11:b5b50001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.923 [2024-11-27 15:06:18.202284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.923 [2024-11-27 15:06:18.202339] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:52b5b5b5 cdw11:b5b50001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.923 [2024-11-27 15:06:18.202353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.923 [2024-11-27 15:06:18.202408] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:4bb5b5bd cdw11:b5b50001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.923 [2024-11-27 15:06:18.202422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:52.923 #36 NEW cov: 12523 ft: 15291 corp: 24/389b lim: 35 exec/s: 36 rss: 75Mb L: 22/22 MS: 1 ChangeBit- 00:06:53.182 [2024-11-27 15:06:18.262272] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:b9b9b9b9 cdw11:b9b90001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.182 [2024-11-27 15:06:18.262298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.182 [2024-11-27 15:06:18.262354] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:b9b9b9b9 cdw11:b9b90001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.182 [2024-11-27 15:06:18.262368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.182 #37 NEW cov: 12523 ft: 15294 corp: 25/409b lim: 35 exec/s: 37 rss: 75Mb L: 20/22 MS: 1 InsertRepeatedBytes- 00:06:53.182 [2024-11-27 15:06:18.302714] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:e5e5e5e5 cdw11:e5e50003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.182 [2024-11-27 15:06:18.302739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.182 [2024-11-27 15:06:18.302792] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:e5e5e5e5 cdw11:e5e50003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.182 [2024-11-27 15:06:18.302806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.182 [2024-11-27 15:06:18.302858] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:e5e5e5e5 cdw11:e5e50003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.182 [2024-11-27 15:06:18.302872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.182 [2024-11-27 15:06:18.302925] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:e5e5e5e5 cdw11:b5b50002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.182 [2024-11-27 15:06:18.302938] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:53.182 #40 NEW cov: 12523 ft: 15633 corp: 26/439b lim: 35 exec/s: 40 rss: 75Mb L: 30/30 
MS: 3 CrossOver-ShuffleBytes-InsertRepeatedBytes- 00:06:53.182 [2024-11-27 15:06:18.342475] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:52525252 cdw11:52500002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.182 [2024-11-27 15:06:18.342503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.182 [2024-11-27 15:06:18.342557] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:52525252 cdw11:52410002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.182 [2024-11-27 15:06:18.342571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.182 #41 NEW cov: 12523 ft: 15641 corp: 27/459b lim: 35 exec/s: 41 rss: 75Mb L: 20/30 MS: 1 InsertByte- 00:06:53.182 [2024-11-27 15:06:18.402859] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:35b552b5 cdw11:b5b50001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.182 [2024-11-27 15:06:18.402884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.182 [2024-11-27 15:06:18.402939] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:52b5b5b5 cdw11:b5b50001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.182 [2024-11-27 15:06:18.402952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.182 [2024-11-27 15:06:18.403003] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:4bb5b5b5 cdw11:b5350001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.182 [2024-11-27 15:06:18.403016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.182 #42 NEW cov: 12523 ft: 15654 corp: 28/481b lim: 35 exec/s: 42 rss: 75Mb L: 22/30 MS: 1 ChangeBit- 00:06:53.182 [2024-11-27 15:06:18.442933] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:35b552b5 cdw11:b5b50001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.182 [2024-11-27 15:06:18.442958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.182 [2024-11-27 15:06:18.443011] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:52b5b5b5 cdw11:28b50001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.182 [2024-11-27 15:06:18.443024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.182 [2024-11-27 15:06:18.443078] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:4bb5b5b5 cdw11:b5350001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.182 [2024-11-27 15:06:18.443091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.182 #43 NEW cov: 12523 ft: 15679 corp: 29/503b lim: 35 exec/s: 43 rss: 75Mb L: 22/30 MS: 1 ChangeByte- 00:06:53.182 [2024-11-27 15:06:18.502960] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:52525252 cdw11:52520002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:06:53.182 [2024-11-27 15:06:18.502986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.182 [2024-11-27 15:06:18.503040] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:adadaead cdw11:adad0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.182 [2024-11-27 15:06:18.503053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.441 #44 NEW cov: 12523 ft: 15687 corp: 30/522b lim: 35 exec/s: 44 rss: 75Mb L: 19/30 MS: 1 ChangeBinInt- 00:06:53.441 [2024-11-27 15:06:18.543028] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:52525252 cdw11:52500002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.441 [2024-11-27 15:06:18.543053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.441 [2024-11-27 15:06:18.543109] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:52525252 cdw11:52520002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.441 [2024-11-27 15:06:18.543125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.441 #45 NEW cov: 12523 ft: 15699 corp: 31/541b lim: 35 exec/s: 45 rss: 75Mb L: 19/30 MS: 1 CopyPart- 00:06:53.441 [2024-11-27 15:06:18.583131] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:52b55252 cdw11:b5b50001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.441 [2024-11-27 15:06:18.583156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.441 [2024-11-27 15:06:18.583210] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:b5b5b54b cdw11:b5b50001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.441 [2024-11-27 15:06:18.583224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.441 #46 NEW cov: 12523 ft: 15711 corp: 32/560b lim: 35 exec/s: 46 rss: 75Mb L: 19/30 MS: 1 CrossOver- 00:06:53.441 [2024-11-27 15:06:18.643181] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:b5f152b5 cdw11:b5b50001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.441 [2024-11-27 15:06:18.643207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.442 #47 NEW cov: 12523 ft: 15727 corp: 33/573b lim: 35 exec/s: 47 rss: 75Mb L: 13/30 MS: 1 ChangeByte- 00:06:53.442 [2024-11-27 15:06:18.683436] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:52525252 cdw11:52500002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.442 [2024-11-27 15:06:18.683461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.442 [2024-11-27 15:06:18.683515] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:52525252 cdw11:52520002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.442 [2024-11-27 15:06:18.683529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 
sqhd:0010 p:0 m:0 dnr:0 00:06:53.442 #48 NEW cov: 12523 ft: 15774 corp: 34/593b lim: 35 exec/s: 48 rss: 75Mb L: 20/30 MS: 1 InsertByte- 00:06:53.442 [2024-11-27 15:06:18.743779] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:b5f152b5 cdw11:52b50001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.442 [2024-11-27 15:06:18.743805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.442 [2024-11-27 15:06:18.743857] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:b5b5b5b5 cdw11:b5b50002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.442 [2024-11-27 15:06:18.743871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.442 [2024-11-27 15:06:18.743921] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:b5b5b5b5 cdw11:b54b0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.442 [2024-11-27 15:06:18.743934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.702 #49 NEW cov: 12523 ft: 15779 corp: 35/616b lim: 35 exec/s: 24 rss: 75Mb L: 23/30 MS: 1 CrossOver- 00:06:53.702 #49 DONE cov: 12523 ft: 15779 corp: 35/616b lim: 35 exec/s: 24 rss: 75Mb 00:06:53.702 ###### Recommended dictionary. ###### 00:06:53.702 "\002\000\000\000" # Uses: 0 00:06:53.702 ###### End of recommended dictionary. ###### 00:06:53.702 Done 49 runs in 2 second(s) 00:06:53.702 15:06:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_4.conf /var/tmp/suppress_nvmf_fuzz 00:06:53.702 15:06:18 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:53.702 15:06:18 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:53.702 15:06:18 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1 00:06:53.702 15:06:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=5 00:06:53.702 15:06:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:53.702 15:06:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:53.702 15:06:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:06:53.702 15:06:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_5.conf 00:06:53.702 15:06:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:53.702 15:06:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:53.702 15:06:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 5 00:06:53.702 15:06:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4405 00:06:53.702 15:06:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:06:53.702 15:06:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' 00:06:53.702 15:06:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4405"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:53.702 15:06:18 
llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:53.702 15:06:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:53.702 15:06:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' -c /tmp/fuzz_json_5.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 -Z 5 00:06:53.702 [2024-11-27 15:06:18.919604] Starting SPDK v25.01-pre git sha1 2e10c84c8 / DPDK 24.03.0 initialization... 00:06:53.702 [2024-11-27 15:06:18.919658] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2367556 ] 00:06:53.962 [2024-11-27 15:06:19.100306] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.962 [2024-11-27 15:06:19.133645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.962 [2024-11-27 15:06:19.193167] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:53.962 [2024-11-27 15:06:19.209523] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4405 *** 00:06:53.962 INFO: Running with entropic power schedule (0xFF, 100). 00:06:53.962 INFO: Seed: 3627577609 00:06:53.962 INFO: Loaded 1 modules (389699 inline 8-bit counters): 389699 [0x2c6ea8c, 0x2ccdccf), 00:06:53.962 INFO: Loaded 1 PC tables (389699 PCs): 389699 [0x2ccdcd0,0x32c0100), 00:06:53.962 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:06:53.962 INFO: A corpus is not provided, starting from an empty corpus 00:06:53.962 #2 INITED exec/s: 0 rss: 65Mb 00:06:53.962 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:06:53.962 This may also happen if the target rejected all inputs we tried so far 00:06:53.962 [2024-11-27 15:06:19.268540] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:25250a25 cdw11:25250001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.962 [2024-11-27 15:06:19.268569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.962 [2024-11-27 15:06:19.268628] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:25252525 cdw11:25250001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.962 [2024-11-27 15:06:19.268643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.962 [2024-11-27 15:06:19.268692] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:25252525 cdw11:25250001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.962 [2024-11-27 15:06:19.268709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:54.480 NEW_FUNC[1/717]: 0x443ef8 in fuzz_admin_create_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:142 00:06:54.480 NEW_FUNC[2/717]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:54.480 #8 NEW cov: 12305 ft: 12308 corp: 2/32b lim: 45 exec/s: 0 rss: 73Mb L: 31/31 MS: 1 InsertRepeatedBytes- 00:06:54.480 [2024-11-27 15:06:19.609651] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:250a0a25 cdw11:25250001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.480 [2024-11-27 15:06:19.609706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.480 [2024-11-27 15:06:19.609786] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:25252525 cdw11:25250001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.480 [2024-11-27 15:06:19.609812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.480 [2024-11-27 15:06:19.609886] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:25252525 cdw11:25250001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.480 [2024-11-27 15:06:19.609912] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:54.480 #9 NEW cov: 12420 ft: 12963 corp: 3/64b lim: 45 exec/s: 0 rss: 73Mb L: 32/32 MS: 1 CrossOver- 00:06:54.480 [2024-11-27 15:06:19.679490] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:250a0a25 cdw11:25250001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.480 [2024-11-27 15:06:19.679515] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.480 [2024-11-27 15:06:19.679584] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:25252525 cdw11:25250001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.480 [2024-11-27 15:06:19.679602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.480 [2024-11-27 15:06:19.679652] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:25252525 cdw11:25250001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.480 [2024-11-27 15:06:19.679676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:54.480 #10 NEW cov: 12426 ft: 13287 corp: 4/95b lim: 45 exec/s: 0 rss: 73Mb L: 31/32 MS: 1 EraseBytes- 00:06:54.480 [2024-11-27 15:06:19.739615] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:250a0a25 cdw11:25250001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.480 [2024-11-27 15:06:19.739639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.480 [2024-11-27 15:06:19.739710] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:25252525 cdw11:25250001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.480 [2024-11-27 15:06:19.739724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.480 [2024-11-27 15:06:19.739777] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:25252525 cdw11:25250001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.480 [2024-11-27 15:06:19.739790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:54.480 #11 NEW cov: 12511 ft: 13624 corp: 5/127b lim: 45 exec/s: 0 rss: 73Mb L: 32/32 MS: 1 ChangeByte- 00:06:54.480 [2024-11-27 15:06:19.779734] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:250a0a25 cdw11:25250001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.480 [2024-11-27 15:06:19.779759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.480 [2024-11-27 15:06:19.779826] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:25252525 cdw11:25250001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.480 [2024-11-27 15:06:19.779840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.480 [2024-11-27 15:06:19.779890] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:25252525 cdw11:25250001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.480 [2024-11-27 15:06:19.779904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:54.739 #12 NEW cov: 12511 ft: 13737 corp: 6/158b lim: 45 exec/s: 0 rss: 73Mb L: 31/32 MS: 1 CopyPart- 00:06:54.739 [2024-11-27 15:06:19.840065] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:250a0a25 cdw11:25250001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.739 [2024-11-27 15:06:19.840089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.739 [2024-11-27 15:06:19.840158] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:25252525 cdw11:25250001 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:06:54.739 [2024-11-27 15:06:19.840172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.739 [2024-11-27 15:06:19.840223] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.739 [2024-11-27 15:06:19.840236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:54.739 [2024-11-27 15:06:19.840289] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:25250001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.739 [2024-11-27 15:06:19.840303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:54.739 #13 NEW cov: 12511 ft: 14120 corp: 7/202b lim: 45 exec/s: 0 rss: 73Mb L: 44/44 MS: 1 InsertRepeatedBytes- 00:06:54.739 [2024-11-27 15:06:19.899917] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:baba0aba cdw11:baba0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.739 [2024-11-27 15:06:19.899941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.739 [2024-11-27 15:06:19.900009] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:babababa cdw11:baba0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.739 [2024-11-27 15:06:19.900025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.739 #14 NEW cov: 12511 ft: 14446 corp: 8/221b lim: 45 exec/s: 0 rss: 73Mb L: 19/44 MS: 1 InsertRepeatedBytes- 00:06:54.739 [2024-11-27 15:06:19.940310] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:250a0a25 cdw11:25250001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.739 [2024-11-27 15:06:19.940334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.739 [2024-11-27 15:06:19.940402] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:25252525 cdw11:25250001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.739 [2024-11-27 15:06:19.940416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.739 [2024-11-27 15:06:19.940469] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:25252525 cdw11:25250000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.739 [2024-11-27 15:06:19.940482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:54.739 #15 NEW cov: 12511 ft: 14488 corp: 9/253b lim: 45 exec/s: 0 rss: 74Mb L: 32/44 MS: 1 ChangeByte- 00:06:54.739 [2024-11-27 15:06:20.000333] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:250a0a25 cdw11:21250001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.739 [2024-11-27 15:06:20.000359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.739 [2024-11-27 15:06:20.000427] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:25252525 cdw11:25250001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.739 [2024-11-27 15:06:20.000441] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.739 [2024-11-27 15:06:20.000492] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:25252525 cdw11:25250001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.739 [2024-11-27 15:06:20.000505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:54.739 #16 NEW cov: 12511 ft: 14547 corp: 10/285b lim: 45 exec/s: 0 rss: 74Mb L: 32/44 MS: 1 ChangeBit- 00:06:54.739 [2024-11-27 15:06:20.040474] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:250a0a25 cdw11:25250001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.739 [2024-11-27 15:06:20.040502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.739 [2024-11-27 15:06:20.040555] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:25252525 cdw11:25250001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.740 [2024-11-27 15:06:20.040570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.740 [2024-11-27 15:06:20.040627] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:25252525 cdw11:25250001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.740 [2024-11-27 15:06:20.040641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:54.740 #17 NEW cov: 12511 ft: 14657 corp: 11/316b lim: 45 exec/s: 0 rss: 74Mb L: 31/44 MS: 1 CrossOver- 00:06:54.999 [2024-11-27 15:06:20.080647] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:250a0a25 cdw11:25250001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.999 [2024-11-27 15:06:20.080678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.999 [2024-11-27 15:06:20.080731] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:258e2525 cdw11:b90e0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.999 [2024-11-27 15:06:20.080745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.999 [2024-11-27 15:06:20.080797] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:25250000 cdw11:25250001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.999 [2024-11-27 15:06:20.080810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:54.999 #18 NEW cov: 12511 ft: 14759 corp: 12/347b lim: 45 exec/s: 0 rss: 74Mb L: 31/44 MS: 1 CMP- DE: "\216\271\016\364q\177\000\000"- 00:06:54.999 [2024-11-27 15:06:20.120457] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:b90e008e cdw11:f4710003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.999 [2024-11-27 15:06:20.120488] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.999 NEW_FUNC[1/1]: 0x1c47d08 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:06:54.999 #23 NEW cov: 12534 ft: 15525 corp: 13/357b lim: 45 exec/s: 0 rss: 74Mb L: 10/44 MS: 5 ShuffleBytes-CopyPart-ChangeBinInt-ChangeBinInt-PersAutoDict- DE: "\216\271\016\364q\177\000\000"- 00:06:54.999 [2024-11-27 15:06:20.160826] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:250a0a25 cdw11:25250001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.999 [2024-11-27 15:06:20.160852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.999 [2024-11-27 15:06:20.160919] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:25252525 cdw11:25250001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.999 [2024-11-27 15:06:20.160945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.999 [2024-11-27 15:06:20.160994] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:25252525 cdw11:25250001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.999 [2024-11-27 15:06:20.161008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:54.999 #24 NEW cov: 12534 ft: 15549 corp: 14/388b lim: 45 exec/s: 0 rss: 74Mb L: 31/44 MS: 1 ShuffleBytes- 00:06:54.999 [2024-11-27 15:06:20.220880] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:baba0aba cdw11:baba0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.999 [2024-11-27 15:06:20.220906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.999 [2024-11-27 15:06:20.220959] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:babababa cdw11:baba0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.999 [2024-11-27 15:06:20.220973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.999 #25 NEW cov: 12534 ft: 15634 corp: 15/408b lim: 45 exec/s: 25 rss: 74Mb L: 20/44 MS: 1 InsertByte- 00:06:54.999 [2024-11-27 15:06:20.281140] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:250a0a25 cdw11:25250001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.999 [2024-11-27 15:06:20.281167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.999 [2024-11-27 15:06:20.281236] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:25252525 cdw11:25250001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.999 [2024-11-27 15:06:20.281250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.999 [2024-11-27 15:06:20.281303] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:25322525 cdw11:25250001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.999 [2024-11-27 15:06:20.281316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:54.999 #26 NEW cov: 12534 ft: 15645 corp: 16/440b lim: 45 exec/s: 26 rss: 74Mb L: 32/44 MS: 1 ShuffleBytes- 00:06:54.999 [2024-11-27 15:06:20.321264] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:250a0a25 cdw11:25250001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.999 [2024-11-27 15:06:20.321289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.999 [2024-11-27 15:06:20.321341] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:25252500 cdw11:25250001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.999 [2024-11-27 15:06:20.321358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.999 [2024-11-27 15:06:20.321413] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:25252525 cdw11:25250001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.999 [2024-11-27 15:06:20.321426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:55.258 #32 NEW cov: 12534 ft: 15654 corp: 17/471b lim: 45 exec/s: 32 rss: 74Mb L: 31/44 MS: 1 ChangeByte- 00:06:55.258 [2024-11-27 15:06:20.361395] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:250a0a25 cdw11:25250001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.258 [2024-11-27 15:06:20.361420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.258 [2024-11-27 15:06:20.361474] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:25252525 cdw11:25250001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.258 [2024-11-27 15:06:20.361487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.258 [2024-11-27 15:06:20.361538] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:25252525 cdw11:25250000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.258 [2024-11-27 15:06:20.361551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:55.258 #33 NEW cov: 12534 ft: 15667 corp: 18/506b lim: 45 exec/s: 33 rss: 74Mb L: 35/44 MS: 1 CopyPart- 00:06:55.258 [2024-11-27 15:06:20.421582] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:250a0a25 cdw11:25250001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.258 [2024-11-27 15:06:20.421613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.258 [2024-11-27 15:06:20.421666] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:25252525 cdw11:25250001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.258 [2024-11-27 15:06:20.421679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.258 [2024-11-27 15:06:20.421730] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:25252525 cdw11:25250001 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:06:55.258 [2024-11-27 15:06:20.421743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:55.258 #34 NEW cov: 12534 ft: 15686 corp: 19/537b lim: 45 exec/s: 34 rss: 74Mb L: 31/44 MS: 1 ShuffleBytes- 00:06:55.258 [2024-11-27 15:06:20.461787] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:250a0a25 cdw11:25250001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.258 [2024-11-27 15:06:20.461813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.258 [2024-11-27 15:06:20.461881] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:f471b90e cdw11:7f000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.258 [2024-11-27 15:06:20.461895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.258 [2024-11-27 15:06:20.461946] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:25252525 cdw11:25250001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.258 [2024-11-27 15:06:20.461960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:55.258 [2024-11-27 15:06:20.462011] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:25252525 cdw11:25250001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.258 [2024-11-27 15:06:20.462024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:55.258 #35 NEW cov: 12534 ft: 15697 corp: 20/576b lim: 45 exec/s: 35 rss: 74Mb L: 39/44 MS: 1 PersAutoDict- DE: "\216\271\016\364q\177\000\000"- 00:06:55.258 [2024-11-27 15:06:20.521790] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:250a0a25 cdw11:25250001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.259 [2024-11-27 15:06:20.521815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.259 [2024-11-27 15:06:20.521867] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:25252525 cdw11:25250001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.259 [2024-11-27 15:06:20.521883] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.259 [2024-11-27 15:06:20.521933] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:25252525 cdw11:25250001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.259 [2024-11-27 15:06:20.521948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:55.259 #36 NEW cov: 12534 ft: 15715 corp: 21/607b lim: 45 exec/s: 36 rss: 74Mb L: 31/44 MS: 1 CopyPart- 00:06:55.259 [2024-11-27 15:06:20.561929] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:250a0a25 cdw11:25250001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.259 [2024-11-27 15:06:20.561955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.259 [2024-11-27 
15:06:20.562008] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:25252525 cdw11:25250001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.259 [2024-11-27 15:06:20.562023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.259 [2024-11-27 15:06:20.562074] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:25252525 cdw11:25250001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.259 [2024-11-27 15:06:20.562087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:55.259 #37 NEW cov: 12534 ft: 15792 corp: 22/638b lim: 45 exec/s: 37 rss: 74Mb L: 31/44 MS: 1 ChangeByte- 00:06:55.518 [2024-11-27 15:06:20.602222] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:250a0a25 cdw11:252c0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.518 [2024-11-27 15:06:20.602246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.518 [2024-11-27 15:06:20.602312] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:25252525 cdw11:25250001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.518 [2024-11-27 15:06:20.602326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.518 [2024-11-27 15:06:20.602375] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.518 [2024-11-27 15:06:20.602388] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:55.518 [2024-11-27 15:06:20.602439] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:25250001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.518 [2024-11-27 15:06:20.602452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:55.518 #38 NEW cov: 12534 ft: 15817 corp: 23/682b lim: 45 exec/s: 38 rss: 74Mb L: 44/44 MS: 1 ChangeBinInt- 00:06:55.518 [2024-11-27 15:06:20.662196] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:250a0a25 cdw11:25250001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.518 [2024-11-27 15:06:20.662227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.518 [2024-11-27 15:06:20.662297] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:25252525 cdw11:25250001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.518 [2024-11-27 15:06:20.662311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.518 [2024-11-27 15:06:20.662364] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:25252525 cdw11:25250001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.518 [2024-11-27 15:06:20.662377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:55.518 #39 NEW 
cov: 12534 ft: 15819 corp: 24/717b lim: 45 exec/s: 39 rss: 74Mb L: 35/44 MS: 1 CMP- DE: "\377\377\377\000"- 00:06:55.518 [2024-11-27 15:06:20.722339] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:250a0a25 cdw11:25250001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.518 [2024-11-27 15:06:20.722363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.518 [2024-11-27 15:06:20.722432] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:25db2525 cdw11:dada0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.518 [2024-11-27 15:06:20.722445] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.518 [2024-11-27 15:06:20.722498] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:2525dada cdw11:25250001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.518 [2024-11-27 15:06:20.722511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:55.518 #40 NEW cov: 12534 ft: 15857 corp: 25/748b lim: 45 exec/s: 40 rss: 74Mb L: 31/44 MS: 1 ChangeBinInt- 00:06:55.518 [2024-11-27 15:06:20.762487] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:250a0a25 cdw11:25250001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.518 [2024-11-27 15:06:20.762510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.518 [2024-11-27 15:06:20.762562] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:258e2525 cdw11:b90e0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.519 [2024-11-27 15:06:20.762577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.519 [2024-11-27 15:06:20.762633] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:25250000 cdw11:25250001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.519 [2024-11-27 15:06:20.762646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:55.519 #41 NEW cov: 12534 ft: 15864 corp: 26/779b lim: 45 exec/s: 41 rss: 74Mb L: 31/44 MS: 1 ChangeBit- 00:06:55.519 [2024-11-27 15:06:20.822643] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:250a0a25 cdw11:25250001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.519 [2024-11-27 15:06:20.822667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.519 [2024-11-27 15:06:20.822738] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:25db2525 cdw11:dada0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.519 [2024-11-27 15:06:20.822752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.519 [2024-11-27 15:06:20.822803] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:2525dada cdw11:25250001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.519 [2024-11-27 15:06:20.822820] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:55.778 #42 NEW cov: 12534 ft: 15878 corp: 27/810b lim: 45 exec/s: 42 rss: 75Mb L: 31/44 MS: 1 ChangeBit- 00:06:55.778 [2024-11-27 15:06:20.882970] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:250a0a25 cdw11:25250001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.778 [2024-11-27 15:06:20.882995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.778 [2024-11-27 15:06:20.883046] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:25db2525 cdw11:dada0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.778 [2024-11-27 15:06:20.883061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.778 [2024-11-27 15:06:20.883112] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:0ef48eb9 cdw11:717f0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.778 [2024-11-27 15:06:20.883124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:55.778 [2024-11-27 15:06:20.883175] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:2525da25 cdw11:25250001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.778 [2024-11-27 15:06:20.883188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:55.778 #43 NEW cov: 12534 ft: 15897 corp: 28/849b lim: 45 exec/s: 43 rss: 75Mb L: 39/44 MS: 1 PersAutoDict- DE: "\216\271\016\364q\177\000\000"- 00:06:55.778 [2024-11-27 15:06:20.923074] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:250a0a25 cdw11:25250001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.778 [2024-11-27 15:06:20.923099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.778 [2024-11-27 15:06:20.923167] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:22222525 cdw11:22220001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.778 [2024-11-27 15:06:20.923181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.778 [2024-11-27 15:06:20.923232] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:25252225 cdw11:25250001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.778 [2024-11-27 15:06:20.923246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:55.778 [2024-11-27 15:06:20.923298] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:25252525 cdw11:25250001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.778 [2024-11-27 15:06:20.923312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:55.778 #44 NEW cov: 12534 ft: 15919 corp: 29/888b lim: 45 exec/s: 44 rss: 75Mb L: 39/44 MS: 1 InsertRepeatedBytes- 00:06:55.778 [2024-11-27 15:06:20.982774] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:b90e008e cdw11:f4710003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.778 [2024-11-27 15:06:20.982799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.778 #45 NEW cov: 12534 ft: 15940 corp: 30/902b lim: 45 exec/s: 45 rss: 75Mb L: 14/44 MS: 1 CopyPart- 00:06:55.778 [2024-11-27 15:06:21.043424] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:250a0a25 cdw11:252c0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.778 [2024-11-27 15:06:21.043448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.778 [2024-11-27 15:06:21.043520] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:25252525 cdw11:25250001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.778 [2024-11-27 15:06:21.043534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.778 [2024-11-27 15:06:21.043586] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:f471b90e cdw11:7f000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.778 [2024-11-27 15:06:21.043603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:55.778 [2024-11-27 15:06:21.043655] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:25250001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.779 [2024-11-27 15:06:21.043679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:55.779 #46 NEW cov: 12534 ft: 15964 corp: 31/946b lim: 45 exec/s: 46 rss: 75Mb L: 44/44 MS: 1 PersAutoDict- DE: "\216\271\016\364q\177\000\000"- 00:06:55.779 [2024-11-27 15:06:21.103755] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:40250a25 cdw11:0a250001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.779 [2024-11-27 15:06:21.103779] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.779 [2024-11-27 15:06:21.103850] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:25252525 cdw11:25250001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.779 [2024-11-27 15:06:21.103864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.779 [2024-11-27 15:06:21.103916] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00002500 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.779 [2024-11-27 15:06:21.103929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:55.779 [2024-11-27 15:06:21.103981] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00250001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.779 [2024-11-27 15:06:21.103994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:55.779 [2024-11-27 15:06:21.104046] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:8 nsid:0 cdw10:25252525 cdw11:25250001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.779 [2024-11-27 15:06:21.104060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:56.039 #47 NEW cov: 12534 ft: 16041 corp: 32/991b lim: 45 exec/s: 47 rss: 75Mb L: 45/45 MS: 1 InsertByte- 00:06:56.039 [2024-11-27 15:06:21.143749] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:250a0a25 cdw11:25250001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.039 [2024-11-27 15:06:21.143773] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.039 [2024-11-27 15:06:21.143844] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:25252525 cdw11:25250001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.039 [2024-11-27 15:06:21.143859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.039 [2024-11-27 15:06:21.143912] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:25252525 cdw11:25250001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.039 [2024-11-27 15:06:21.143925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:56.039 [2024-11-27 15:06:21.143985] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:25252525 cdw11:25250001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.039 [2024-11-27 15:06:21.143998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:56.039 #48 NEW cov: 12534 ft: 16047 corp: 33/1028b lim: 45 exec/s: 48 rss: 75Mb L: 37/45 MS: 1 CopyPart- 00:06:56.039 [2024-11-27 15:06:21.183719] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:250a0a25 cdw11:25250001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.039 [2024-11-27 15:06:21.183743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.039 [2024-11-27 15:06:21.183811] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:25252525 cdw11:25250001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.039 [2024-11-27 15:06:21.183825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.039 [2024-11-27 15:06:21.183876] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:25252525 cdw11:25250001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.039 [2024-11-27 15:06:21.183889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:56.039 #49 NEW cov: 12534 ft: 16071 corp: 34/1059b lim: 45 exec/s: 49 rss: 75Mb L: 31/45 MS: 1 ShuffleBytes- 00:06:56.039 [2024-11-27 15:06:21.223784] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:250a0a25 cdw11:25250001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.039 [2024-11-27 15:06:21.223809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.039 [2024-11-27 15:06:21.223862] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:25252500 cdw11:25250001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.039 [2024-11-27 15:06:21.223876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.039 [2024-11-27 15:06:21.223928] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:25252525 cdw11:25250001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.039 [2024-11-27 15:06:21.223942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:56.039 #50 NEW cov: 12534 ft: 16136 corp: 35/1090b lim: 45 exec/s: 25 rss: 75Mb L: 31/45 MS: 1 ChangeBit- 00:06:56.039 #50 DONE cov: 12534 ft: 16136 corp: 35/1090b lim: 45 exec/s: 25 rss: 75Mb 00:06:56.039 ###### Recommended dictionary. ###### 00:06:56.039 "\216\271\016\364q\177\000\000" # Uses: 4 00:06:56.039 "\377\377\377\000" # Uses: 0 00:06:56.039 ###### End of recommended dictionary. ###### 00:06:56.039 Done 50 runs in 2 second(s) 00:06:56.039 15:06:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_5.conf /var/tmp/suppress_nvmf_fuzz 00:06:56.039 15:06:21 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:56.040 15:06:21 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:56.040 15:06:21 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1 00:06:56.040 15:06:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=6 00:06:56.040 15:06:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:56.040 15:06:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:56.040 15:06:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:06:56.040 15:06:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_6.conf 00:06:56.040 15:06:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:56.040 15:06:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:56.040 15:06:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 6 00:06:56.040 15:06:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4406 00:06:56.040 15:06:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:06:56.040 15:06:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' 00:06:56.040 15:06:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4406"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:56.299 15:06:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:56.299 15:06:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:56.299 15:06:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' -c /tmp/fuzz_json_6.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 -Z 6 00:06:56.299 [2024-11-27 15:06:21.395631] Starting SPDK v25.01-pre git sha1 2e10c84c8 / DPDK 24.03.0 initialization... 00:06:56.299 [2024-11-27 15:06:21.395686] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2367874 ] 00:06:56.299 [2024-11-27 15:06:21.579427] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.299 [2024-11-27 15:06:21.610917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.558 [2024-11-27 15:06:21.670514] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:56.558 [2024-11-27 15:06:21.686887] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4406 *** 00:06:56.558 INFO: Running with entropic power schedule (0xFF, 100). 00:06:56.558 INFO: Seed: 1809626213 00:06:56.558 INFO: Loaded 1 modules (389699 inline 8-bit counters): 389699 [0x2c6ea8c, 0x2ccdccf), 00:06:56.558 INFO: Loaded 1 PC tables (389699 PCs): 389699 [0x2ccdcd0,0x32c0100), 00:06:56.558 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:06:56.558 INFO: A corpus is not provided, starting from an empty corpus 00:06:56.558 #2 INITED exec/s: 0 rss: 67Mb 00:06:56.558 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:56.558 This may also happen if the target rejected all inputs we tried so far 00:06:56.558 [2024-11-27 15:06:21.756199] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000e0e0 cdw11:00000000 00:06:56.558 [2024-11-27 15:06:21.756237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.817 NEW_FUNC[1/715]: 0x446708 in fuzz_admin_delete_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:161 00:06:56.817 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:56.817 #4 NEW cov: 12224 ft: 12211 corp: 2/3b lim: 10 exec/s: 0 rss: 73Mb L: 2/2 MS: 2 ChangeByte-CopyPart- 00:06:56.817 [2024-11-27 15:06:22.107196] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00008a31 cdw11:00000000 00:06:56.817 [2024-11-27 15:06:22.107238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.817 #6 NEW cov: 12337 ft: 12849 corp: 3/5b lim: 10 exec/s: 0 rss: 73Mb L: 2/2 MS: 2 ChangeBit-InsertByte- 00:06:57.076 [2024-11-27 15:06:22.157737] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000100 cdw11:00000000 00:06:57.076 [2024-11-27 15:06:22.157764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.076 [2024-11-27 15:06:22.157883] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 
cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:57.076 [2024-11-27 15:06:22.157902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.076 #7 NEW cov: 12343 ft: 13281 corp: 4/10b lim: 10 exec/s: 0 rss: 73Mb L: 5/5 MS: 1 CMP- DE: "\001\000\000\000"- 00:06:57.076 [2024-11-27 15:06:22.208256] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000100 cdw11:00000000 00:06:57.076 [2024-11-27 15:06:22.208283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.076 [2024-11-27 15:06:22.208413] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:57.076 [2024-11-27 15:06:22.208430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.076 [2024-11-27 15:06:22.208558] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000100 cdw11:00000000 00:06:57.076 [2024-11-27 15:06:22.208575] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:57.076 [2024-11-27 15:06:22.208702] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:57.076 [2024-11-27 15:06:22.208720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:57.076 #8 NEW cov: 12428 ft: 13774 corp: 5/19b lim: 10 exec/s: 0 rss: 73Mb L: 9/9 MS: 1 CopyPart- 00:06:57.076 [2024-11-27 15:06:22.277756] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000e00a cdw11:00000000 00:06:57.076 [2024-11-27 15:06:22.277784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.076 #10 NEW cov: 12428 ft: 14005 corp: 6/21b lim: 10 exec/s: 0 rss: 73Mb L: 2/9 MS: 2 ShuffleBytes-CrossOver- 00:06:57.076 [2024-11-27 15:06:22.328389] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000e0e0 cdw11:00000000 00:06:57.076 [2024-11-27 15:06:22.328417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.076 [2024-11-27 15:06:22.328547] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000100 cdw11:00000000 00:06:57.076 [2024-11-27 15:06:22.328567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.076 [2024-11-27 15:06:22.328701] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:57.076 [2024-11-27 15:06:22.328720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:57.076 #11 NEW cov: 12428 ft: 14233 corp: 7/27b lim: 10 exec/s: 0 rss: 73Mb L: 6/9 MS: 1 PersAutoDict- DE: "\001\000\000\000"- 00:06:57.076 [2024-11-27 15:06:22.398104] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00008a31 cdw11:00000000 00:06:57.076 [2024-11-27 
15:06:22.398131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.336 #12 NEW cov: 12428 ft: 14301 corp: 8/29b lim: 10 exec/s: 0 rss: 73Mb L: 2/9 MS: 1 ShuffleBytes- 00:06:57.336 [2024-11-27 15:06:22.469340] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00008a00 cdw11:00000000 00:06:57.336 [2024-11-27 15:06:22.469365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.336 [2024-11-27 15:06:22.469489] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:57.336 [2024-11-27 15:06:22.469508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.336 [2024-11-27 15:06:22.469627] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:57.336 [2024-11-27 15:06:22.469646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:57.336 [2024-11-27 15:06:22.469765] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:57.336 [2024-11-27 15:06:22.469785] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:57.336 [2024-11-27 15:06:22.469917] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00000031 cdw11:00000000 00:06:57.336 [2024-11-27 15:06:22.469935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:57.336 #13 NEW cov: 12428 ft: 14421 corp: 9/39b lim: 10 exec/s: 0 rss: 74Mb L: 10/10 MS: 1 InsertRepeatedBytes- 00:06:57.336 [2024-11-27 15:06:22.539149] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000e029 cdw11:00000000 00:06:57.336 [2024-11-27 15:06:22.539177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.336 [2024-11-27 15:06:22.539309] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000100 cdw11:00000000 00:06:57.336 [2024-11-27 15:06:22.539327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.336 [2024-11-27 15:06:22.539452] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:57.336 [2024-11-27 15:06:22.539471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:57.336 #14 NEW cov: 12428 ft: 14469 corp: 10/45b lim: 10 exec/s: 0 rss: 74Mb L: 6/10 MS: 1 ChangeByte- 00:06:57.336 [2024-11-27 15:06:22.609281] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000e0e0 cdw11:00000000 00:06:57.336 [2024-11-27 15:06:22.609309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.336 [2024-11-27 15:06:22.609428] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000100 cdw11:00000000 00:06:57.336 [2024-11-27 15:06:22.609447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.336 [2024-11-27 15:06:22.609565] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:57.337 [2024-11-27 15:06:22.609583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:57.337 NEW_FUNC[1/1]: 0x1c47d08 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:06:57.337 #15 NEW cov: 12451 ft: 14538 corp: 11/51b lim: 10 exec/s: 0 rss: 74Mb L: 6/10 MS: 1 CopyPart- 00:06:57.337 [2024-11-27 15:06:22.659320] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000e0e0 cdw11:00000000 00:06:57.337 [2024-11-27 15:06:22.659348] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.337 [2024-11-27 15:06:22.659493] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000107 cdw11:00000000 00:06:57.337 [2024-11-27 15:06:22.659511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.337 [2024-11-27 15:06:22.659644] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:57.337 [2024-11-27 15:06:22.659664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:57.597 #16 NEW cov: 12451 ft: 14553 corp: 12/57b lim: 10 exec/s: 0 rss: 74Mb L: 6/10 MS: 1 ChangeBinInt- 00:06:57.597 [2024-11-27 15:06:22.729898] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000100 cdw11:00000000 00:06:57.597 [2024-11-27 15:06:22.729926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.597 [2024-11-27 15:06:22.730047] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:57.597 [2024-11-27 15:06:22.730064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.597 [2024-11-27 15:06:22.730183] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000100 cdw11:00000000 00:06:57.597 [2024-11-27 15:06:22.730201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:57.597 [2024-11-27 15:06:22.730327] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:57.597 [2024-11-27 15:06:22.730346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:57.597 #17 NEW cov: 12451 ft: 14575 corp: 13/66b lim: 10 exec/s: 17 rss: 74Mb L: 9/10 MS: 1 CMP- DE: "\000\000"- 00:06:57.597 [2024-11-27 15:06:22.800371] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 
cdw10:00008a00 cdw11:00000000 00:06:57.597 [2024-11-27 15:06:22.800397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.597 [2024-11-27 15:06:22.800522] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000029 cdw11:00000000 00:06:57.597 [2024-11-27 15:06:22.800542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.597 [2024-11-27 15:06:22.800680] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:57.597 [2024-11-27 15:06:22.800697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:57.597 [2024-11-27 15:06:22.800822] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:57.597 [2024-11-27 15:06:22.800838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:57.597 [2024-11-27 15:06:22.800964] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00000031 cdw11:00000000 00:06:57.597 [2024-11-27 15:06:22.800980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:57.597 #18 NEW cov: 12451 ft: 14617 corp: 14/76b lim: 10 exec/s: 18 rss: 74Mb L: 10/10 MS: 1 ChangeByte- 00:06:57.597 [2024-11-27 15:06:22.869937] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000e0e9 cdw11:00000000 00:06:57.597 [2024-11-27 15:06:22.869966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.597 [2024-11-27 15:06:22.870099] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000e9e9 cdw11:00000000 00:06:57.597 [2024-11-27 15:06:22.870118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.597 #19 NEW cov: 12451 ft: 14627 corp: 15/81b lim: 10 exec/s: 19 rss: 74Mb L: 5/10 MS: 1 InsertRepeatedBytes- 00:06:57.857 [2024-11-27 15:06:22.940283] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000e0e0 cdw11:00000000 00:06:57.857 [2024-11-27 15:06:22.940313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.857 [2024-11-27 15:06:22.940454] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00005b07 cdw11:00000000 00:06:57.857 [2024-11-27 15:06:22.940474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.857 [2024-11-27 15:06:22.940602] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:57.857 [2024-11-27 15:06:22.940620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:57.857 #20 NEW cov: 12451 ft: 14628 corp: 16/87b lim: 10 exec/s: 20 rss: 74Mb L: 6/10 MS: 1 ChangeByte- 00:06:57.857 
[2024-11-27 15:06:23.009957] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000700 cdw11:00000000 00:06:57.857 [2024-11-27 15:06:23.009986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.857 #21 NEW cov: 12451 ft: 14652 corp: 17/90b lim: 10 exec/s: 21 rss: 74Mb L: 3/10 MS: 1 EraseBytes- 00:06:57.857 [2024-11-27 15:06:23.060689] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00008a00 cdw11:00000000 00:06:57.857 [2024-11-27 15:06:23.060718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.857 [2024-11-27 15:06:23.060839] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:57.857 [2024-11-27 15:06:23.060856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.857 [2024-11-27 15:06:23.060987] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:57.857 [2024-11-27 15:06:23.061007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:57.857 #22 NEW cov: 12451 ft: 14680 corp: 18/97b lim: 10 exec/s: 22 rss: 74Mb L: 7/10 MS: 1 InsertRepeatedBytes- 00:06:57.857 [2024-11-27 15:06:23.110921] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00008a2c cdw11:00000000 00:06:57.857 [2024-11-27 15:06:23.110949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.857 [2024-11-27 15:06:23.111058] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:57.857 [2024-11-27 15:06:23.111076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.857 [2024-11-27 15:06:23.111206] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:57.857 [2024-11-27 15:06:23.111224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:57.857 #23 NEW cov: 12451 ft: 14741 corp: 19/104b lim: 10 exec/s: 23 rss: 74Mb L: 7/10 MS: 1 ChangeByte- 00:06:57.857 [2024-11-27 15:06:23.180629] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00006631 cdw11:00000000 00:06:57.857 [2024-11-27 15:06:23.180657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.117 #24 NEW cov: 12451 ft: 14827 corp: 20/106b lim: 10 exec/s: 24 rss: 74Mb L: 2/10 MS: 1 ChangeByte- 00:06:58.117 [2024-11-27 15:06:23.231659] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00008a00 cdw11:00000000 00:06:58.117 [2024-11-27 15:06:23.231686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.117 [2024-11-27 15:06:23.231811] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) 
qid:0 cid:5 nsid:0 cdw10:00000029 cdw11:00000000 00:06:58.117 [2024-11-27 15:06:23.231831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.117 [2024-11-27 15:06:23.231960] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:58.117 [2024-11-27 15:06:23.231978] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.117 [2024-11-27 15:06:23.232101] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:58.117 [2024-11-27 15:06:23.232120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:58.117 [2024-11-27 15:06:23.232234] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00000030 cdw11:00000000 00:06:58.117 [2024-11-27 15:06:23.232252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:58.117 #25 NEW cov: 12451 ft: 14837 corp: 21/116b lim: 10 exec/s: 25 rss: 75Mb L: 10/10 MS: 1 ChangeASCIIInt- 00:06:58.117 [2024-11-27 15:06:23.302072] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000100 cdw11:00000000 00:06:58.117 [2024-11-27 15:06:23.302100] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.117 [2024-11-27 15:06:23.302222] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:58.117 [2024-11-27 15:06:23.302242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.117 [2024-11-27 15:06:23.302366] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000100 cdw11:00000000 00:06:58.117 [2024-11-27 15:06:23.302384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.117 [2024-11-27 15:06:23.302504] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:58.117 [2024-11-27 15:06:23.302523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:58.117 [2024-11-27 15:06:23.302647] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000a20a cdw11:00000000 00:06:58.117 [2024-11-27 15:06:23.302667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:58.117 #26 NEW cov: 12451 ft: 14847 corp: 22/126b lim: 10 exec/s: 26 rss: 75Mb L: 10/10 MS: 1 InsertByte- 00:06:58.117 [2024-11-27 15:06:23.351159] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00008a8a cdw11:00000000 00:06:58.117 [2024-11-27 15:06:23.351187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.117 #27 NEW cov: 12451 ft: 14864 corp: 23/129b lim: 10 exec/s: 27 rss: 75Mb L: 3/10 MS: 1 CopyPart- 
00:06:58.117 [2024-11-27 15:06:23.401452] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00008a8a cdw11:00000000 00:06:58.117 [2024-11-27 15:06:23.401480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.117 #28 NEW cov: 12451 ft: 14872 corp: 24/131b lim: 10 exec/s: 28 rss: 75Mb L: 2/10 MS: 1 EraseBytes- 00:06:58.383 [2024-11-27 15:06:23.472377] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000e0e0 cdw11:00000000 00:06:58.383 [2024-11-27 15:06:23.472406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.383 [2024-11-27 15:06:23.472536] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000107 cdw11:00000000 00:06:58.383 [2024-11-27 15:06:23.472558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.383 [2024-11-27 15:06:23.472686] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:58.383 [2024-11-27 15:06:23.472705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.383 [2024-11-27 15:06:23.472831] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:58.383 [2024-11-27 15:06:23.472846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:58.383 #29 NEW cov: 12451 ft: 14909 corp: 25/139b lim: 10 exec/s: 29 rss: 75Mb L: 8/10 MS: 1 PersAutoDict- DE: "\000\000"- 00:06:58.383 [2024-11-27 15:06:23.522495] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000e0e0 cdw11:00000000 00:06:58.383 [2024-11-27 15:06:23.522523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.383 [2024-11-27 15:06:23.522668] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000100 cdw11:00000000 00:06:58.383 [2024-11-27 15:06:23.522688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.383 [2024-11-27 15:06:23.522809] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:58.383 [2024-11-27 15:06:23.522828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.383 [2024-11-27 15:06:23.522957] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:58.383 [2024-11-27 15:06:23.522976] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:58.383 #30 NEW cov: 12451 ft: 14914 corp: 26/148b lim: 10 exec/s: 30 rss: 75Mb L: 9/10 MS: 1 InsertRepeatedBytes- 00:06:58.383 [2024-11-27 15:06:23.572840] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000e0e0 cdw11:00000000 00:06:58.383 [2024-11-27 
15:06:23.572867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.383 [2024-11-27 15:06:23.572996] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000100 cdw11:00000000 00:06:58.383 [2024-11-27 15:06:23.573015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.384 [2024-11-27 15:06:23.573138] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:58.384 [2024-11-27 15:06:23.573155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.384 [2024-11-27 15:06:23.573281] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:58.384 [2024-11-27 15:06:23.573298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:58.384 [2024-11-27 15:06:23.573421] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 00:06:58.384 [2024-11-27 15:06:23.573438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:58.384 #31 NEW cov: 12451 ft: 14921 corp: 27/158b lim: 10 exec/s: 31 rss: 75Mb L: 10/10 MS: 1 CMP- DE: "\001\000\000\000\000\000\000\000"- 00:06:58.384 [2024-11-27 15:06:23.622106] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000700 cdw11:00000000 00:06:58.384 [2024-11-27 15:06:23.622136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.384 #32 NEW cov: 12451 ft: 14938 corp: 28/161b lim: 10 exec/s: 32 rss: 75Mb L: 3/10 MS: 1 CopyPart- 00:06:58.384 [2024-11-27 15:06:23.692561] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00008a00 cdw11:00000000 00:06:58.384 [2024-11-27 15:06:23.692590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.384 [2024-11-27 15:06:23.692716] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:58.384 [2024-11-27 15:06:23.692735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.384 #33 NEW cov: 12451 ft: 14966 corp: 29/166b lim: 10 exec/s: 33 rss: 75Mb L: 5/10 MS: 1 EraseBytes- 00:06:58.645 [2024-11-27 15:06:23.742727] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:58.645 [2024-11-27 15:06:23.742755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.645 [2024-11-27 15:06:23.742880] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:58.645 [2024-11-27 15:06:23.742898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.645 #34 NEW cov: 12451 ft: 14980 corp: 
30/171b lim: 10 exec/s: 17 rss: 75Mb L: 5/10 MS: 1 InsertRepeatedBytes- 00:06:58.645 #34 DONE cov: 12451 ft: 14980 corp: 30/171b lim: 10 exec/s: 17 rss: 75Mb 00:06:58.645 ###### Recommended dictionary. ###### 00:06:58.645 "\001\000\000\000" # Uses: 1 00:06:58.645 "\000\000" # Uses: 1 00:06:58.645 "\001\000\000\000\000\000\000\000" # Uses: 0 00:06:58.645 ###### End of recommended dictionary. ###### 00:06:58.645 Done 34 runs in 2 second(s) 00:06:58.645 15:06:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_6.conf /var/tmp/suppress_nvmf_fuzz 00:06:58.645 15:06:23 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:58.645 15:06:23 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:58.645 15:06:23 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 7 1 0x1 00:06:58.645 15:06:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=7 00:06:58.645 15:06:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:58.645 15:06:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:58.645 15:06:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:06:58.645 15:06:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_7.conf 00:06:58.645 15:06:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:58.645 15:06:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:58.645 15:06:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 7 00:06:58.645 15:06:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4407 00:06:58.645 15:06:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:06:58.645 15:06:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' 00:06:58.645 15:06:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4407"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:58.645 15:06:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:58.645 15:06:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:58.645 15:06:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' -c /tmp/fuzz_json_7.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 -Z 7 00:06:58.645 [2024-11-27 15:06:23.909290] Starting SPDK v25.01-pre git sha1 2e10c84c8 / DPDK 24.03.0 initialization... 
00:06:58.645 [2024-11-27 15:06:23.909379] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2368403 ] 00:06:58.906 [2024-11-27 15:06:24.093895] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.906 [2024-11-27 15:06:24.131627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.906 [2024-11-27 15:06:24.190891] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:58.906 [2024-11-27 15:06:24.207214] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4407 *** 00:06:58.906 INFO: Running with entropic power schedule (0xFF, 100). 00:06:58.906 INFO: Seed: 36648331 00:06:58.906 INFO: Loaded 1 modules (389699 inline 8-bit counters): 389699 [0x2c6ea8c, 0x2ccdccf), 00:06:58.906 INFO: Loaded 1 PC tables (389699 PCs): 389699 [0x2ccdcd0,0x32c0100), 00:06:58.906 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:06:58.906 INFO: A corpus is not provided, starting from an empty corpus 00:06:58.906 #2 INITED exec/s: 0 rss: 66Mb 00:06:58.906 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:58.906 This may also happen if the target rejected all inputs we tried so far 00:06:59.166 [2024-11-27 15:06:24.252803] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:59.166 [2024-11-27 15:06:24.252835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.166 [2024-11-27 15:06:24.252895] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:59.166 [2024-11-27 15:06:24.252912] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.166 [2024-11-27 15:06:24.252973] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000023 cdw11:00000000 00:06:59.166 [2024-11-27 15:06:24.252992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.425 NEW_FUNC[1/715]: 0x447108 in fuzz_admin_delete_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:172 00:06:59.426 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:59.426 #6 NEW cov: 12220 ft: 12223 corp: 2/7b lim: 10 exec/s: 0 rss: 73Mb L: 6/6 MS: 4 ChangeByte-ChangeBit-ChangeByte-InsertRepeatedBytes- 00:06:59.426 [2024-11-27 15:06:24.583780] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:59.426 [2024-11-27 15:06:24.583814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.426 [2024-11-27 15:06:24.583877] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:59.426 [2024-11-27 15:06:24.583895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.426 [2024-11-27 15:06:24.583957] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:59.426 [2024-11-27 15:06:24.583976] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.426 [2024-11-27 15:06:24.584038] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:59.426 [2024-11-27 15:06:24.584061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:59.426 #7 NEW cov: 12337 ft: 12957 corp: 3/16b lim: 10 exec/s: 0 rss: 73Mb L: 9/9 MS: 1 CrossOver- 00:06:59.426 [2024-11-27 15:06:24.643633] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:59.426 [2024-11-27 15:06:24.643660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.426 [2024-11-27 15:06:24.643720] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:59.426 [2024-11-27 15:06:24.643739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.426 #8 NEW cov: 12343 ft: 13429 corp: 4/20b lim: 10 exec/s: 0 rss: 73Mb L: 4/9 MS: 1 EraseBytes- 00:06:59.426 [2024-11-27 15:06:24.683836] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:59.426 [2024-11-27 15:06:24.683863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.426 [2024-11-27 15:06:24.683924] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:59.426 [2024-11-27 15:06:24.683942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.426 [2024-11-27 15:06:24.684001] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:59.426 [2024-11-27 15:06:24.684021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.426 #9 NEW cov: 12428 ft: 13676 corp: 5/26b lim: 10 exec/s: 0 rss: 73Mb L: 6/9 MS: 1 EraseBytes- 00:06:59.426 [2024-11-27 15:06:24.743739] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:59.426 [2024-11-27 15:06:24.743765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.685 #10 NEW cov: 12428 ft: 14089 corp: 6/29b lim: 10 exec/s: 0 rss: 73Mb L: 3/9 MS: 1 EraseBytes- 00:06:59.685 [2024-11-27 15:06:24.783853] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:59.685 [2024-11-27 15:06:24.783880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.685 #11 NEW cov: 12428 ft: 14221 corp: 7/32b lim: 10 
exec/s: 0 rss: 73Mb L: 3/9 MS: 1 EraseBytes- 00:06:59.685 [2024-11-27 15:06:24.844254] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00005b00 cdw11:00000000 00:06:59.685 [2024-11-27 15:06:24.844281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.685 [2024-11-27 15:06:24.844343] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:59.685 [2024-11-27 15:06:24.844361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.685 [2024-11-27 15:06:24.844422] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:59.685 [2024-11-27 15:06:24.844442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.685 #12 NEW cov: 12428 ft: 14251 corp: 8/38b lim: 10 exec/s: 0 rss: 74Mb L: 6/9 MS: 1 ChangeByte- 00:06:59.685 [2024-11-27 15:06:24.904309] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:59.685 [2024-11-27 15:06:24.904335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.685 [2024-11-27 15:06:24.904403] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:59.685 [2024-11-27 15:06:24.904422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.685 #13 NEW cov: 12428 ft: 14299 corp: 9/42b lim: 10 exec/s: 0 rss: 74Mb L: 4/9 MS: 1 CopyPart- 00:06:59.685 [2024-11-27 15:06:24.964467] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:59.685 [2024-11-27 15:06:24.964494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.685 [2024-11-27 15:06:24.964555] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:59.685 [2024-11-27 15:06:24.964574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.685 #14 NEW cov: 12428 ft: 14359 corp: 10/46b lim: 10 exec/s: 0 rss: 74Mb L: 4/9 MS: 1 ShuffleBytes- 00:06:59.685 [2024-11-27 15:06:25.004648] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00005b00 cdw11:00000000 00:06:59.685 [2024-11-27 15:06:25.004675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.685 [2024-11-27 15:06:25.004736] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:59.685 [2024-11-27 15:06:25.004755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.685 [2024-11-27 15:06:25.004817] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00002c00 cdw11:00000000 00:06:59.685 
[2024-11-27 15:06:25.004838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.944 #15 NEW cov: 12428 ft: 14400 corp: 11/53b lim: 10 exec/s: 0 rss: 74Mb L: 7/9 MS: 1 InsertByte- 00:06:59.944 [2024-11-27 15:06:25.064650] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:59.944 [2024-11-27 15:06:25.064677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.944 #16 NEW cov: 12428 ft: 14471 corp: 12/55b lim: 10 exec/s: 0 rss: 74Mb L: 2/9 MS: 1 EraseBytes- 00:06:59.944 [2024-11-27 15:06:25.124813] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000800 cdw11:00000000 00:06:59.944 [2024-11-27 15:06:25.124840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.944 NEW_FUNC[1/1]: 0x1c47d08 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:06:59.944 #17 NEW cov: 12451 ft: 14531 corp: 13/57b lim: 10 exec/s: 0 rss: 74Mb L: 2/9 MS: 1 ChangeBit- 00:06:59.944 [2024-11-27 15:06:25.185191] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:59.944 [2024-11-27 15:06:25.185218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.944 [2024-11-27 15:06:25.185283] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:59.944 [2024-11-27 15:06:25.185302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.944 [2024-11-27 15:06:25.185364] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000023 cdw11:00000000 00:06:59.944 [2024-11-27 15:06:25.185383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.944 #18 NEW cov: 12451 ft: 14548 corp: 14/63b lim: 10 exec/s: 0 rss: 74Mb L: 6/9 MS: 1 CopyPart- 00:06:59.944 [2024-11-27 15:06:25.225322] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 00:06:59.944 [2024-11-27 15:06:25.225350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.944 [2024-11-27 15:06:25.225412] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:59.944 [2024-11-27 15:06:25.225431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.944 [2024-11-27 15:06:25.225507] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000023 cdw11:00000000 00:06:59.945 [2024-11-27 15:06:25.225527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.945 #19 NEW cov: 12451 ft: 14574 corp: 15/69b lim: 10 exec/s: 19 rss: 74Mb L: 6/9 MS: 1 ChangeBit- 00:07:00.204 [2024-11-27 15:06:25.285375] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:00.204 [2024-11-27 15:06:25.285403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.204 [2024-11-27 15:06:25.285466] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000040 cdw11:00000000 00:07:00.204 [2024-11-27 15:06:25.285485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.204 #20 NEW cov: 12451 ft: 14598 corp: 16/73b lim: 10 exec/s: 20 rss: 74Mb L: 4/9 MS: 1 ChangeBit- 00:07:00.204 [2024-11-27 15:06:25.325818] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00001a1a cdw11:00000000 00:07:00.204 [2024-11-27 15:06:25.325846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.204 [2024-11-27 15:06:25.325909] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00001a1a cdw11:00000000 00:07:00.204 [2024-11-27 15:06:25.325928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.204 [2024-11-27 15:06:25.325992] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:00.204 [2024-11-27 15:06:25.326013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.204 [2024-11-27 15:06:25.326076] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:00.204 [2024-11-27 15:06:25.326093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:00.204 [2024-11-27 15:06:25.326155] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 00:07:00.204 [2024-11-27 15:06:25.326172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:00.204 #21 NEW cov: 12451 ft: 14637 corp: 17/83b lim: 10 exec/s: 21 rss: 74Mb L: 10/10 MS: 1 InsertRepeatedBytes- 00:07:00.204 [2024-11-27 15:06:25.365425] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:000000ce cdw11:00000000 00:07:00.204 [2024-11-27 15:06:25.365453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.204 #22 NEW cov: 12451 ft: 14646 corp: 18/86b lim: 10 exec/s: 22 rss: 74Mb L: 3/10 MS: 1 ChangeByte- 00:07:00.204 [2024-11-27 15:06:25.405562] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:00.204 [2024-11-27 15:06:25.405588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.204 #23 NEW cov: 12451 ft: 14687 corp: 19/88b lim: 10 exec/s: 23 rss: 74Mb L: 2/10 MS: 1 EraseBytes- 00:07:00.204 [2024-11-27 15:06:25.466233] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00005b00 
cdw11:00000000 00:07:00.204 [2024-11-27 15:06:25.466260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.205 [2024-11-27 15:06:25.466322] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:00.205 [2024-11-27 15:06:25.466341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.205 [2024-11-27 15:06:25.466402] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:00.205 [2024-11-27 15:06:25.466422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.205 [2024-11-27 15:06:25.466484] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000002c cdw11:00000000 00:07:00.205 [2024-11-27 15:06:25.466499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:00.205 [2024-11-27 15:06:25.466562] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 00:07:00.205 [2024-11-27 15:06:25.466579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:00.205 #24 NEW cov: 12451 ft: 14711 corp: 20/98b lim: 10 exec/s: 24 rss: 74Mb L: 10/10 MS: 1 CrossOver- 00:07:00.205 [2024-11-27 15:06:25.526226] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00007b00 cdw11:00000000 00:07:00.205 [2024-11-27 15:06:25.526252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.205 [2024-11-27 15:06:25.526315] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:00.205 [2024-11-27 15:06:25.526334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.205 [2024-11-27 15:06:25.526397] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:00.205 [2024-11-27 15:06:25.526415] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.464 #25 NEW cov: 12451 ft: 14721 corp: 21/104b lim: 10 exec/s: 25 rss: 74Mb L: 6/10 MS: 1 ChangeBit- 00:07:00.464 [2024-11-27 15:06:25.566164] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:00.464 [2024-11-27 15:06:25.566191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.464 [2024-11-27 15:06:25.566253] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:000000fe cdw11:00000000 00:07:00.464 [2024-11-27 15:06:25.566273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.464 #26 NEW cov: 12451 ft: 14738 corp: 22/108b lim: 10 exec/s: 26 rss: 74Mb L: 4/10 MS: 1 ChangeBinInt- 00:07:00.464 [2024-11-27 15:06:25.606388] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00007b00 cdw11:00000000 00:07:00.464 [2024-11-27 15:06:25.606414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.464 [2024-11-27 15:06:25.606477] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:00.464 [2024-11-27 15:06:25.606495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.464 [2024-11-27 15:06:25.606559] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:00.464 [2024-11-27 15:06:25.606576] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.464 #27 NEW cov: 12451 ft: 14825 corp: 23/115b lim: 10 exec/s: 27 rss: 75Mb L: 7/10 MS: 1 CrossOver- 00:07:00.464 [2024-11-27 15:06:25.666553] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:00.464 [2024-11-27 15:06:25.666580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.464 [2024-11-27 15:06:25.666648] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00004000 cdw11:00000000 00:07:00.465 [2024-11-27 15:06:25.666668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.465 [2024-11-27 15:06:25.666732] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:00.465 [2024-11-27 15:06:25.666750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.465 #28 NEW cov: 12451 ft: 14842 corp: 24/122b lim: 10 exec/s: 28 rss: 75Mb L: 7/10 MS: 1 CrossOver- 00:07:00.465 [2024-11-27 15:06:25.726771] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00005b00 cdw11:00000000 00:07:00.465 [2024-11-27 15:06:25.726797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.465 [2024-11-27 15:06:25.726858] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:00.465 [2024-11-27 15:06:25.726878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.465 [2024-11-27 15:06:25.726940] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:00.465 [2024-11-27 15:06:25.726959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.465 #29 NEW cov: 12451 ft: 14921 corp: 25/128b lim: 10 exec/s: 29 rss: 75Mb L: 6/10 MS: 1 ShuffleBytes- 00:07:00.465 [2024-11-27 15:06:25.766926] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 00:07:00.465 [2024-11-27 15:06:25.766952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.465 [2024-11-27 15:06:25.767012] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000030 cdw11:00000000 00:07:00.465 [2024-11-27 15:06:25.767031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.465 [2024-11-27 15:06:25.767092] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:00.465 [2024-11-27 15:06:25.767111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.724 #30 NEW cov: 12451 ft: 14931 corp: 26/135b lim: 10 exec/s: 30 rss: 75Mb L: 7/10 MS: 1 InsertByte- 00:07:00.724 [2024-11-27 15:06:25.826991] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:000000fe cdw11:00000000 00:07:00.724 [2024-11-27 15:06:25.827018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.724 [2024-11-27 15:06:25.827083] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:00.724 [2024-11-27 15:06:25.827102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.724 #31 NEW cov: 12451 ft: 14937 corp: 27/139b lim: 10 exec/s: 31 rss: 75Mb L: 4/10 MS: 1 ShuffleBytes- 00:07:00.724 [2024-11-27 15:06:25.887131] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:00.724 [2024-11-27 15:06:25.887157] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.724 [2024-11-27 15:06:25.887219] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000028 cdw11:00000000 00:07:00.724 [2024-11-27 15:06:25.887237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.724 #32 NEW cov: 12451 ft: 14945 corp: 28/144b lim: 10 exec/s: 32 rss: 75Mb L: 5/10 MS: 1 InsertByte- 00:07:00.724 [2024-11-27 15:06:25.927472] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00005b00 cdw11:00000000 00:07:00.724 [2024-11-27 15:06:25.927499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.724 [2024-11-27 15:06:25.927564] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:00.724 [2024-11-27 15:06:25.927584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.725 [2024-11-27 15:06:25.927662] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00008181 cdw11:00000000 00:07:00.725 [2024-11-27 15:06:25.927683] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.725 [2024-11-27 15:06:25.927744] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00008100 
cdw11:00000000 00:07:00.725 [2024-11-27 15:06:25.927764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:00.725 #33 NEW cov: 12451 ft: 14953 corp: 29/153b lim: 10 exec/s: 33 rss: 75Mb L: 9/10 MS: 1 InsertRepeatedBytes- 00:07:00.725 [2024-11-27 15:06:25.987472] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:00.725 [2024-11-27 15:06:25.987498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.725 [2024-11-27 15:06:25.987560] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:00.725 [2024-11-27 15:06:25.987579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.725 [2024-11-27 15:06:25.987645] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:00.725 [2024-11-27 15:06:25.987665] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.725 #34 NEW cov: 12451 ft: 14957 corp: 30/159b lim: 10 exec/s: 34 rss: 75Mb L: 6/10 MS: 1 CopyPart- 00:07:00.725 [2024-11-27 15:06:26.027727] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00005b00 cdw11:00000000 00:07:00.725 [2024-11-27 15:06:26.027754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.725 [2024-11-27 15:06:26.027828] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:00.725 [2024-11-27 15:06:26.027847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.725 [2024-11-27 15:06:26.027909] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00008181 cdw11:00000000 00:07:00.725 [2024-11-27 15:06:26.027928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.725 [2024-11-27 15:06:26.027993] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00008130 cdw11:00000000 00:07:00.725 [2024-11-27 15:06:26.028009] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:00.989 #35 NEW cov: 12451 ft: 14970 corp: 31/168b lim: 10 exec/s: 35 rss: 75Mb L: 9/10 MS: 1 ChangeByte- 00:07:00.989 [2024-11-27 15:06:26.087501] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00007e00 cdw11:00000000 00:07:00.989 [2024-11-27 15:06:26.087529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.989 #36 NEW cov: 12451 ft: 14990 corp: 32/171b lim: 10 exec/s: 36 rss: 75Mb L: 3/10 MS: 1 ChangeByte- 00:07:00.989 [2024-11-27 15:06:26.147690] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000080 cdw11:00000000 00:07:00.989 [2024-11-27 15:06:26.147717] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.989 #37 NEW cov: 12451 ft: 14997 corp: 33/173b lim: 10 exec/s: 37 rss: 75Mb L: 2/10 MS: 1 ChangeBit- 00:07:00.989 [2024-11-27 15:06:26.187790] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00007e00 cdw11:00000000 00:07:00.989 [2024-11-27 15:06:26.187817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.989 #38 NEW cov: 12451 ft: 15000 corp: 34/176b lim: 10 exec/s: 38 rss: 75Mb L: 3/10 MS: 1 ChangeByte- 00:07:00.989 [2024-11-27 15:06:26.248109] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:00.989 [2024-11-27 15:06:26.248136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.989 [2024-11-27 15:06:26.248202] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000040 cdw11:00000000 00:07:00.989 [2024-11-27 15:06:26.248221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.989 #39 NEW cov: 12451 ft: 15006 corp: 35/180b lim: 10 exec/s: 19 rss: 75Mb L: 4/10 MS: 1 ShuffleBytes- 00:07:00.990 #39 DONE cov: 12451 ft: 15006 corp: 35/180b lim: 10 exec/s: 19 rss: 75Mb 00:07:00.990 Done 39 runs in 2 second(s) 00:07:01.249 15:06:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_7.conf /var/tmp/suppress_nvmf_fuzz 00:07:01.249 15:06:26 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:01.249 15:06:26 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:01.249 15:06:26 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 8 1 0x1 00:07:01.249 15:06:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=8 00:07:01.249 15:06:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:01.249 15:06:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:01.249 15:06:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:07:01.249 15:06:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_8.conf 00:07:01.249 15:06:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:01.249 15:06:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:01.249 15:06:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 8 00:07:01.249 15:06:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4408 00:07:01.249 15:06:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:07:01.249 15:06:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' 00:07:01.249 15:06:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4408"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:01.249 15:06:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:01.249 
15:06:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:01.250 15:06:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' -c /tmp/fuzz_json_8.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 -Z 8 00:07:01.250 [2024-11-27 15:06:26.444243] Starting SPDK v25.01-pre git sha1 2e10c84c8 / DPDK 24.03.0 initialization... 00:07:01.250 [2024-11-27 15:06:26.444331] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2368850 ] 00:07:01.606 [2024-11-27 15:06:26.630909] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.606 [2024-11-27 15:06:26.666221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.606 [2024-11-27 15:06:26.726271] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:01.606 [2024-11-27 15:06:26.742625] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4408 *** 00:07:01.606 INFO: Running with entropic power schedule (0xFF, 100). 00:07:01.606 INFO: Seed: 2572646516 00:07:01.606 INFO: Loaded 1 modules (389699 inline 8-bit counters): 389699 [0x2c6ea8c, 0x2ccdccf), 00:07:01.606 INFO: Loaded 1 PC tables (389699 PCs): 389699 [0x2ccdcd0,0x32c0100), 00:07:01.606 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:07:01.606 INFO: A corpus is not provided, starting from an empty corpus 00:07:01.606 [2024-11-27 15:06:26.798003] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.606 [2024-11-27 15:06:26.798033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.606 #2 INITED cov: 12251 ft: 12249 corp: 1/1b exec/s: 0 rss: 71Mb 00:07:01.606 [2024-11-27 15:06:26.838190] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.606 [2024-11-27 15:06:26.838216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.606 [2024-11-27 15:06:26.838270] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.606 [2024-11-27 15:06:26.838284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.606 #3 NEW cov: 12364 ft: 13564 corp: 2/3b lim: 5 exec/s: 0 rss: 72Mb L: 2/2 MS: 1 InsertByte- 00:07:01.606 [2024-11-27 15:06:26.898366] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.606 [2024-11-27 15:06:26.898392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 
cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.606 [2024-11-27 15:06:26.898448] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.606 [2024-11-27 15:06:26.898462] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.932 #4 NEW cov: 12370 ft: 13708 corp: 3/5b lim: 5 exec/s: 0 rss: 72Mb L: 2/2 MS: 1 ChangeByte- 00:07:01.932 [2024-11-27 15:06:26.958482] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.932 [2024-11-27 15:06:26.958512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.932 [2024-11-27 15:06:26.958568] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.932 [2024-11-27 15:06:26.958581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.932 #5 NEW cov: 12455 ft: 13965 corp: 4/7b lim: 5 exec/s: 0 rss: 72Mb L: 2/2 MS: 1 CrossOver- 00:07:01.932 [2024-11-27 15:06:26.998576] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.932 [2024-11-27 15:06:26.998607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.932 [2024-11-27 15:06:26.998664] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.932 [2024-11-27 15:06:26.998678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.932 #6 NEW cov: 12455 ft: 13988 corp: 5/9b lim: 5 exec/s: 0 rss: 72Mb L: 2/2 MS: 1 ShuffleBytes- 00:07:01.932 [2024-11-27 15:06:27.058607] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.932 [2024-11-27 15:06:27.058632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.932 #7 NEW cov: 12455 ft: 14060 corp: 6/10b lim: 5 exec/s: 0 rss: 72Mb L: 1/2 MS: 1 ShuffleBytes- 00:07:01.932 [2024-11-27 15:06:27.099185] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.932 [2024-11-27 15:06:27.099210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.932 [2024-11-27 15:06:27.099267] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.932 [2024-11-27 15:06:27.099282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.932 [2024-11-27 15:06:27.099338] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.932 [2024-11-27 15:06:27.099351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:01.932 [2024-11-27 15:06:27.099407] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.932 [2024-11-27 15:06:27.099421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:01.932 #8 NEW cov: 12455 ft: 14410 corp: 7/14b lim: 5 exec/s: 0 rss: 72Mb L: 4/4 MS: 1 CopyPart- 00:07:01.932 [2024-11-27 15:06:27.159163] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.932 [2024-11-27 15:06:27.159189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.932 [2024-11-27 15:06:27.159244] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.932 [2024-11-27 15:06:27.159258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.932 [2024-11-27 15:06:27.159314] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.932 [2024-11-27 15:06:27.159328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:01.932 #9 NEW cov: 12455 ft: 14595 corp: 8/17b lim: 5 exec/s: 0 rss: 72Mb L: 3/4 MS: 1 EraseBytes- 00:07:01.932 [2024-11-27 15:06:27.219215] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.932 [2024-11-27 15:06:27.219240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.932 [2024-11-27 15:06:27.219296] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.932 [2024-11-27 15:06:27.219309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.932 #10 NEW cov: 12455 ft: 14670 corp: 9/19b lim: 5 exec/s: 0 rss: 72Mb L: 2/4 MS: 1 CopyPart- 00:07:01.932 [2024-11-27 15:06:27.259375] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.932 [2024-11-27 15:06:27.259400] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.932 [2024-11-27 15:06:27.259457] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.932 [2024-11-27 15:06:27.259472] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.196 #11 NEW cov: 12455 ft: 14699 corp: 10/21b lim: 5 exec/s: 0 rss: 72Mb L: 2/4 MS: 1 ChangeByte- 00:07:02.196 [2024-11-27 15:06:27.319350] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.196 [2024-11-27 15:06:27.319375] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.196 #12 NEW cov: 12455 ft: 14790 corp: 11/22b lim: 5 exec/s: 0 rss: 72Mb L: 1/4 MS: 1 ChangeBinInt- 00:07:02.196 [2024-11-27 15:06:27.359460] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.196 [2024-11-27 15:06:27.359485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.196 #13 NEW cov: 12455 ft: 14797 corp: 12/23b lim: 5 exec/s: 0 rss: 72Mb L: 1/4 MS: 1 EraseBytes- 00:07:02.196 [2024-11-27 15:06:27.399521] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.196 [2024-11-27 15:06:27.399547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.196 #14 NEW cov: 12455 ft: 14851 corp: 13/24b lim: 5 exec/s: 0 rss: 72Mb L: 1/4 MS: 1 ChangeBinInt- 00:07:02.196 [2024-11-27 15:06:27.459751] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.196 [2024-11-27 15:06:27.459776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.196 #15 NEW cov: 12455 ft: 14882 corp: 14/25b lim: 5 exec/s: 0 rss: 73Mb L: 1/4 MS: 1 ChangeBinInt- 00:07:02.196 [2024-11-27 15:06:27.519916] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.196 [2024-11-27 15:06:27.519945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.454 #16 NEW cov: 12455 ft: 14999 corp: 15/26b lim: 5 exec/s: 0 rss: 73Mb L: 1/4 MS: 1 EraseBytes- 00:07:02.454 [2024-11-27 15:06:27.580224] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.454 [2024-11-27 15:06:27.580249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.454 [2024-11-27 15:06:27.580305] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.454 [2024-11-27 15:06:27.580319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.454 #17 NEW cov: 12455 ft: 15067 corp: 16/28b lim: 5 exec/s: 0 rss: 73Mb L: 2/4 MS: 1 InsertByte- 00:07:02.454 [2024-11-27 15:06:27.640428] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.454 [2024-11-27 15:06:27.640454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.454 [2024-11-27 15:06:27.640511] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.454 [2024-11-27 15:06:27.640525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.454 #18 NEW cov: 12455 ft: 15092 corp: 17/30b lim: 5 exec/s: 0 rss: 73Mb L: 2/4 MS: 1 InsertByte- 00:07:02.454 [2024-11-27 15:06:27.680706] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.454 [2024-11-27 15:06:27.680732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.454 [2024-11-27 15:06:27.680788] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.454 [2024-11-27 15:06:27.680802] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.454 [2024-11-27 15:06:27.680855] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.454 [2024-11-27 15:06:27.680869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.712 NEW_FUNC[1/1]: 0x1c47d08 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:02.712 #19 NEW cov: 12478 ft: 15132 corp: 18/33b lim: 5 exec/s: 19 rss: 74Mb L: 3/4 MS: 1 InsertByte- 00:07:02.712 [2024-11-27 15:06:27.981426] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.712 [2024-11-27 15:06:27.981458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.712 [2024-11-27 15:06:27.981515] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.712 [2024-11-27 15:06:27.981530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.712 #20 NEW cov: 12478 ft: 15148 corp: 19/35b lim: 5 exec/s: 20 rss: 74Mb L: 2/4 MS: 1 InsertByte- 00:07:02.712 [2024-11-27 15:06:28.021526] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.712 [2024-11-27 15:06:28.021555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.712 [2024-11-27 15:06:28.021610] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE 
ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.712 [2024-11-27 15:06:28.021624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.712 [2024-11-27 15:06:28.021678] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.712 [2024-11-27 15:06:28.021691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.970 #21 NEW cov: 12478 ft: 15163 corp: 20/38b lim: 5 exec/s: 21 rss: 74Mb L: 3/4 MS: 1 ChangeByte- 00:07:02.970 [2024-11-27 15:06:28.081604] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000c cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.970 [2024-11-27 15:06:28.081630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.970 [2024-11-27 15:06:28.081687] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.970 [2024-11-27 15:06:28.081700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.970 #22 NEW cov: 12478 ft: 15223 corp: 21/40b lim: 5 exec/s: 22 rss: 74Mb L: 2/4 MS: 1 InsertByte- 00:07:02.970 [2024-11-27 15:06:28.142033] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.970 [2024-11-27 15:06:28.142058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.970 [2024-11-27 15:06:28.142113] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.970 [2024-11-27 15:06:28.142127] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.970 [2024-11-27 15:06:28.142182] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.970 [2024-11-27 15:06:28.142196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.970 [2024-11-27 15:06:28.142249] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.970 [2024-11-27 15:06:28.142262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:02.970 #23 NEW cov: 12478 ft: 15252 corp: 22/44b lim: 5 exec/s: 23 rss: 74Mb L: 4/4 MS: 1 CrossOver- 00:07:02.970 [2024-11-27 15:06:28.181897] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.970 [2024-11-27 15:06:28.181922] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.970 [2024-11-27 15:06:28.181977] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.970 [2024-11-27 15:06:28.181992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.970 #24 NEW cov: 12478 ft: 15264 corp: 23/46b lim: 5 exec/s: 24 rss: 74Mb L: 2/4 MS: 1 ChangeBinInt- 00:07:02.970 [2024-11-27 15:06:28.242016] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.970 [2024-11-27 15:06:28.242044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.970 [2024-11-27 15:06:28.242100] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.970 [2024-11-27 15:06:28.242113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.970 #25 NEW cov: 12478 ft: 15292 corp: 24/48b lim: 5 exec/s: 25 rss: 74Mb L: 2/4 MS: 1 InsertByte- 00:07:02.970 [2024-11-27 15:06:28.282292] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.970 [2024-11-27 15:06:28.282317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.970 [2024-11-27 15:06:28.282374] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.970 [2024-11-27 15:06:28.282388] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.970 [2024-11-27 15:06:28.282444] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.970 [2024-11-27 15:06:28.282458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.970 #26 NEW cov: 12478 ft: 15299 corp: 25/51b lim: 5 exec/s: 26 rss: 74Mb L: 3/4 MS: 1 InsertByte- 00:07:03.229 [2024-11-27 15:06:28.322388] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.229 [2024-11-27 15:06:28.322412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.229 [2024-11-27 15:06:28.322468] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.229 [2024-11-27 15:06:28.322482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.229 [2024-11-27 15:06:28.322537] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 
cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.229 [2024-11-27 15:06:28.322550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.229 #27 NEW cov: 12478 ft: 15366 corp: 26/54b lim: 5 exec/s: 27 rss: 74Mb L: 3/4 MS: 1 ChangeByte- 00:07:03.229 [2024-11-27 15:06:28.362679] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.229 [2024-11-27 15:06:28.362703] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.229 [2024-11-27 15:06:28.362760] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.229 [2024-11-27 15:06:28.362773] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.229 [2024-11-27 15:06:28.362825] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.229 [2024-11-27 15:06:28.362839] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.229 [2024-11-27 15:06:28.362897] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.229 [2024-11-27 15:06:28.362911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:03.229 #28 NEW cov: 12478 ft: 15369 corp: 27/58b lim: 5 exec/s: 28 rss: 74Mb L: 4/4 MS: 1 InsertRepeatedBytes- 00:07:03.229 [2024-11-27 15:06:28.402626] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.229 [2024-11-27 15:06:28.402651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.229 [2024-11-27 15:06:28.402708] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.229 [2024-11-27 15:06:28.402721] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.229 [2024-11-27 15:06:28.402775] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.229 [2024-11-27 15:06:28.402789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.229 #29 NEW cov: 12478 ft: 15413 corp: 28/61b lim: 5 exec/s: 29 rss: 75Mb L: 3/4 MS: 1 ShuffleBytes- 00:07:03.229 [2024-11-27 15:06:28.462491] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.229 [2024-11-27 15:06:28.462516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 
cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.229 #30 NEW cov: 12478 ft: 15423 corp: 29/62b lim: 5 exec/s: 30 rss: 75Mb L: 1/4 MS: 1 ChangeByte- 00:07:03.229 [2024-11-27 15:06:28.502579] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.229 [2024-11-27 15:06:28.502610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.229 #31 NEW cov: 12478 ft: 15441 corp: 30/63b lim: 5 exec/s: 31 rss: 75Mb L: 1/4 MS: 1 CopyPart- 00:07:03.230 [2024-11-27 15:06:28.542721] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.230 [2024-11-27 15:06:28.542747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.230 #32 NEW cov: 12478 ft: 15446 corp: 31/64b lim: 5 exec/s: 32 rss: 75Mb L: 1/4 MS: 1 ShuffleBytes- 00:07:03.488 [2024-11-27 15:06:28.583064] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.488 [2024-11-27 15:06:28.583089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.488 [2024-11-27 15:06:28.583145] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.488 [2024-11-27 15:06:28.583158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.488 #33 NEW cov: 12478 ft: 15448 corp: 32/66b lim: 5 exec/s: 33 rss: 75Mb L: 2/4 MS: 1 CrossOver- 00:07:03.488 [2024-11-27 15:06:28.643176] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.488 [2024-11-27 15:06:28.643202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.488 [2024-11-27 15:06:28.643260] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.489 [2024-11-27 15:06:28.643275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.489 #34 NEW cov: 12478 ft: 15493 corp: 33/68b lim: 5 exec/s: 34 rss: 75Mb L: 2/4 MS: 1 EraseBytes- 00:07:03.489 [2024-11-27 15:06:28.703831] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.489 [2024-11-27 15:06:28.703856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.489 [2024-11-27 15:06:28.703911] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.489 [2024-11-27 15:06:28.703925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE 
(00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.489 [2024-11-27 15:06:28.703980] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.489 [2024-11-27 15:06:28.703994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.489 [2024-11-27 15:06:28.704046] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.489 [2024-11-27 15:06:28.704060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:03.489 [2024-11-27 15:06:28.704113] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.489 [2024-11-27 15:06:28.704126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:03.489 #35 NEW cov: 12478 ft: 15568 corp: 34/73b lim: 5 exec/s: 35 rss: 75Mb L: 5/5 MS: 1 InsertRepeatedBytes- 00:07:03.489 [2024-11-27 15:06:28.763683] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.489 [2024-11-27 15:06:28.763707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.489 [2024-11-27 15:06:28.763763] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.489 [2024-11-27 15:06:28.763777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.489 [2024-11-27 15:06:28.763834] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.489 [2024-11-27 15:06:28.763847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.489 #36 NEW cov: 12478 ft: 15613 corp: 35/76b lim: 5 exec/s: 18 rss: 75Mb L: 3/5 MS: 1 ChangeByte- 00:07:03.489 #36 DONE cov: 12478 ft: 15613 corp: 35/76b lim: 5 exec/s: 18 rss: 75Mb 00:07:03.489 Done 36 runs in 2 second(s) 00:07:03.747 15:06:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_8.conf /var/tmp/suppress_nvmf_fuzz 00:07:03.747 15:06:28 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:03.747 15:06:28 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:03.747 15:06:28 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 9 1 0x1 00:07:03.747 15:06:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=9 00:07:03.747 15:06:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:03.747 15:06:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:03.747 15:06:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:07:03.747 15:06:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # 
local nvmf_cfg=/tmp/fuzz_json_9.conf 00:07:03.747 15:06:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:03.747 15:06:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:03.747 15:06:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 9 00:07:03.747 15:06:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4409 00:07:03.747 15:06:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:07:03.747 15:06:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409' 00:07:03.748 15:06:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4409"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:03.748 15:06:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:03.748 15:06:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:03.748 15:06:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409' -c /tmp/fuzz_json_9.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 -Z 9 00:07:03.748 [2024-11-27 15:06:28.953047] Starting SPDK v25.01-pre git sha1 2e10c84c8 / DPDK 24.03.0 initialization... 00:07:03.748 [2024-11-27 15:06:28.953129] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2369233 ] 00:07:04.006 [2024-11-27 15:06:29.136117] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.006 [2024-11-27 15:06:29.169832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.006 [2024-11-27 15:06:29.229137] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:04.006 [2024-11-27 15:06:29.245493] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4409 *** 00:07:04.006 INFO: Running with entropic power schedule (0xFF, 100). 
00:07:04.006 INFO: Seed: 780663187 00:07:04.006 INFO: Loaded 1 modules (389699 inline 8-bit counters): 389699 [0x2c6ea8c, 0x2ccdccf), 00:07:04.006 INFO: Loaded 1 PC tables (389699 PCs): 389699 [0x2ccdcd0,0x32c0100), 00:07:04.006 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:07:04.006 INFO: A corpus is not provided, starting from an empty corpus 00:07:04.006 [2024-11-27 15:06:29.321748] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.006 [2024-11-27 15:06:29.321787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.006 #2 INITED cov: 12251 ft: 12252 corp: 1/1b exec/s: 0 rss: 72Mb 00:07:04.264 [2024-11-27 15:06:29.372880] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.264 [2024-11-27 15:06:29.372910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.264 [2024-11-27 15:06:29.373032] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.265 [2024-11-27 15:06:29.373048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.265 [2024-11-27 15:06:29.373167] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.265 [2024-11-27 15:06:29.373184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:04.265 [2024-11-27 15:06:29.373310] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.265 [2024-11-27 15:06:29.373329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:04.265 [2024-11-27 15:06:29.373455] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.265 [2024-11-27 15:06:29.373470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:04.265 #3 NEW cov: 12364 ft: 13710 corp: 2/6b lim: 5 exec/s: 0 rss: 72Mb L: 5/5 MS: 1 InsertRepeatedBytes- 00:07:04.265 [2024-11-27 15:06:29.442080] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.265 [2024-11-27 15:06:29.442110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.265 #4 NEW cov: 12370 ft: 13851 corp: 3/7b lim: 5 exec/s: 0 rss: 72Mb L: 1/5 MS: 1 ShuffleBytes- 00:07:04.265 [2024-11-27 15:06:29.492220] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:07:04.265 [2024-11-27 15:06:29.492250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.265 #5 NEW cov: 12455 ft: 14146 corp: 4/8b lim: 5 exec/s: 0 rss: 72Mb L: 1/5 MS: 1 ChangeBinInt- 00:07:04.265 [2024-11-27 15:06:29.562669] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.265 [2024-11-27 15:06:29.562703] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.265 [2024-11-27 15:06:29.562839] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.265 [2024-11-27 15:06:29.562858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.523 #6 NEW cov: 12455 ft: 14486 corp: 5/10b lim: 5 exec/s: 0 rss: 72Mb L: 2/5 MS: 1 InsertByte- 00:07:04.523 [2024-11-27 15:06:29.632606] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.523 [2024-11-27 15:06:29.632634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.523 #7 NEW cov: 12455 ft: 14578 corp: 6/11b lim: 5 exec/s: 0 rss: 72Mb L: 1/5 MS: 1 CrossOver- 00:07:04.523 [2024-11-27 15:06:29.682668] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.523 [2024-11-27 15:06:29.682697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.523 #8 NEW cov: 12455 ft: 14642 corp: 7/12b lim: 5 exec/s: 0 rss: 72Mb L: 1/5 MS: 1 CrossOver- 00:07:04.523 [2024-11-27 15:06:29.732790] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.523 [2024-11-27 15:06:29.732819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.523 #9 NEW cov: 12455 ft: 14679 corp: 8/13b lim: 5 exec/s: 0 rss: 72Mb L: 1/5 MS: 1 CrossOver- 00:07:04.523 [2024-11-27 15:06:29.803898] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.523 [2024-11-27 15:06:29.803926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.523 [2024-11-27 15:06:29.804062] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.524 [2024-11-27 15:06:29.804079] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.524 [2024-11-27 15:06:29.804201] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:07:04.524 [2024-11-27 15:06:29.804218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:04.524 [2024-11-27 15:06:29.804352] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.524 [2024-11-27 15:06:29.804370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:04.524 #10 NEW cov: 12455 ft: 14788 corp: 9/17b lim: 5 exec/s: 0 rss: 72Mb L: 4/5 MS: 1 InsertRepeatedBytes- 00:07:04.783 [2024-11-27 15:06:29.874055] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.783 [2024-11-27 15:06:29.874083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.783 [2024-11-27 15:06:29.874207] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.783 [2024-11-27 15:06:29.874225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.783 [2024-11-27 15:06:29.874341] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.783 [2024-11-27 15:06:29.874357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:04.783 [2024-11-27 15:06:29.874480] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.783 [2024-11-27 15:06:29.874499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:04.783 #11 NEW cov: 12455 ft: 14823 corp: 10/21b lim: 5 exec/s: 0 rss: 72Mb L: 4/5 MS: 1 ChangeByte- 00:07:04.783 [2024-11-27 15:06:29.943412] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.783 [2024-11-27 15:06:29.943439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.783 #12 NEW cov: 12455 ft: 14912 corp: 11/22b lim: 5 exec/s: 0 rss: 73Mb L: 1/5 MS: 1 CopyPart- 00:07:04.783 [2024-11-27 15:06:29.994211] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.783 [2024-11-27 15:06:29.994238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.783 [2024-11-27 15:06:29.994359] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.783 [2024-11-27 15:06:29.994380] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.783 [2024-11-27 
15:06:29.994506] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.783 [2024-11-27 15:06:29.994524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:04.783 #13 NEW cov: 12455 ft: 15095 corp: 12/25b lim: 5 exec/s: 0 rss: 73Mb L: 3/5 MS: 1 EraseBytes- 00:07:04.783 [2024-11-27 15:06:30.064441] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.783 [2024-11-27 15:06:30.064468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.783 [2024-11-27 15:06:30.064587] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.783 [2024-11-27 15:06:30.064607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.783 [2024-11-27 15:06:30.064728] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.783 [2024-11-27 15:06:30.064745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:04.783 #14 NEW cov: 12455 ft: 15115 corp: 13/28b lim: 5 exec/s: 0 rss: 73Mb L: 3/5 MS: 1 ChangeBinInt- 00:07:05.041 [2024-11-27 15:06:30.124302] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.041 [2024-11-27 15:06:30.124330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.041 [2024-11-27 15:06:30.124455] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.041 [2024-11-27 15:06:30.124473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.041 #15 NEW cov: 12455 ft: 15210 corp: 14/30b lim: 5 exec/s: 0 rss: 73Mb L: 2/5 MS: 1 InsertByte- 00:07:05.041 [2024-11-27 15:06:30.174211] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.041 [2024-11-27 15:06:30.174237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.301 NEW_FUNC[1/1]: 0x1c47d08 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:05.301 #16 NEW cov: 12478 ft: 15255 corp: 15/31b lim: 5 exec/s: 16 rss: 74Mb L: 1/5 MS: 1 CopyPart- 00:07:05.301 [2024-11-27 15:06:30.526150] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.301 [2024-11-27 15:06:30.526186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 
dnr:0 00:07:05.301 [2024-11-27 15:06:30.526311] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.301 [2024-11-27 15:06:30.526328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.301 [2024-11-27 15:06:30.526451] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.301 [2024-11-27 15:06:30.526473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:05.301 [2024-11-27 15:06:30.526602] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.301 [2024-11-27 15:06:30.526618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:05.301 [2024-11-27 15:06:30.526740] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.301 [2024-11-27 15:06:30.526755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:05.301 #17 NEW cov: 12478 ft: 15294 corp: 16/36b lim: 5 exec/s: 17 rss: 74Mb L: 5/5 MS: 1 InsertByte- 00:07:05.301 [2024-11-27 15:06:30.596365] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.301 [2024-11-27 15:06:30.596395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.301 [2024-11-27 15:06:30.596516] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.301 [2024-11-27 15:06:30.596536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.301 [2024-11-27 15:06:30.596653] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.301 [2024-11-27 15:06:30.596671] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:05.301 [2024-11-27 15:06:30.596799] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.301 [2024-11-27 15:06:30.596815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:05.301 [2024-11-27 15:06:30.596932] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.301 [2024-11-27 15:06:30.596949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:05.301 #18 NEW cov: 12478 ft: 
15341 corp: 17/41b lim: 5 exec/s: 18 rss: 74Mb L: 5/5 MS: 1 ChangeByte- 00:07:05.301 [2024-11-27 15:06:30.636545] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.301 [2024-11-27 15:06:30.636572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.301 [2024-11-27 15:06:30.636694] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.301 [2024-11-27 15:06:30.636711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.301 [2024-11-27 15:06:30.636828] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.301 [2024-11-27 15:06:30.636845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:05.301 [2024-11-27 15:06:30.636964] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.301 [2024-11-27 15:06:30.636982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:05.301 [2024-11-27 15:06:30.637110] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.301 [2024-11-27 15:06:30.637126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:05.561 #19 NEW cov: 12478 ft: 15356 corp: 18/46b lim: 5 exec/s: 19 rss: 74Mb L: 5/5 MS: 1 InsertByte- 00:07:05.561 [2024-11-27 15:06:30.685803] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.561 [2024-11-27 15:06:30.685831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.561 [2024-11-27 15:06:30.685957] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.561 [2024-11-27 15:06:30.685974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.561 #20 NEW cov: 12478 ft: 15372 corp: 19/48b lim: 5 exec/s: 20 rss: 74Mb L: 2/5 MS: 1 InsertByte- 00:07:05.561 [2024-11-27 15:06:30.746866] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.561 [2024-11-27 15:06:30.746894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.561 [2024-11-27 15:06:30.747014] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.561 [2024-11-27 
15:06:30.747031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.561 [2024-11-27 15:06:30.747147] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.561 [2024-11-27 15:06:30.747164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:05.561 [2024-11-27 15:06:30.747287] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.561 [2024-11-27 15:06:30.747303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:05.561 [2024-11-27 15:06:30.747415] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.561 [2024-11-27 15:06:30.747432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:05.561 #21 NEW cov: 12478 ft: 15470 corp: 20/53b lim: 5 exec/s: 21 rss: 74Mb L: 5/5 MS: 1 InsertByte- 00:07:05.561 [2024-11-27 15:06:30.796661] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.561 [2024-11-27 15:06:30.796689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.561 [2024-11-27 15:06:30.796814] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.561 [2024-11-27 15:06:30.796831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.561 [2024-11-27 15:06:30.796947] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.561 [2024-11-27 15:06:30.796967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:05.561 [2024-11-27 15:06:30.797100] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.561 [2024-11-27 15:06:30.797120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:05.561 #22 NEW cov: 12478 ft: 15496 corp: 21/57b lim: 5 exec/s: 22 rss: 74Mb L: 4/5 MS: 1 ShuffleBytes- 00:07:05.561 [2024-11-27 15:06:30.845976] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.561 [2024-11-27 15:06:30.846004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.561 #23 NEW cov: 12478 ft: 15517 corp: 22/58b lim: 5 exec/s: 23 rss: 74Mb L: 1/5 MS: 1 ChangeBit- 00:07:05.561 [2024-11-27 15:06:30.897241] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.561 [2024-11-27 15:06:30.897271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.561 [2024-11-27 15:06:30.897389] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.561 [2024-11-27 15:06:30.897407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.561 [2024-11-27 15:06:30.897527] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.561 [2024-11-27 15:06:30.897544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:05.561 [2024-11-27 15:06:30.897664] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.561 [2024-11-27 15:06:30.897681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:05.561 [2024-11-27 15:06:30.897808] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.561 [2024-11-27 15:06:30.897825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:05.820 #24 NEW cov: 12478 ft: 15575 corp: 23/63b lim: 5 exec/s: 24 rss: 75Mb L: 5/5 MS: 1 InsertByte- 00:07:05.820 [2024-11-27 15:06:30.967160] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.820 [2024-11-27 15:06:30.967190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.820 [2024-11-27 15:06:30.967313] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.820 [2024-11-27 15:06:30.967330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.820 [2024-11-27 15:06:30.967447] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.820 [2024-11-27 15:06:30.967463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:05.820 [2024-11-27 15:06:30.967578] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.820 [2024-11-27 15:06:30.967605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:05.820 #25 NEW cov: 12478 ft: 15590 corp: 24/67b lim: 5 exec/s: 25 rss: 75Mb L: 4/5 MS: 1 
EraseBytes- 00:07:05.820 [2024-11-27 15:06:31.037656] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.820 [2024-11-27 15:06:31.037684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.820 [2024-11-27 15:06:31.037810] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.820 [2024-11-27 15:06:31.037828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.820 [2024-11-27 15:06:31.037950] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.820 [2024-11-27 15:06:31.037964] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:05.820 [2024-11-27 15:06:31.038082] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.820 [2024-11-27 15:06:31.038099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:05.820 [2024-11-27 15:06:31.038207] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.820 [2024-11-27 15:06:31.038222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:05.820 #26 NEW cov: 12478 ft: 15612 corp: 25/72b lim: 5 exec/s: 26 rss: 75Mb L: 5/5 MS: 1 CrossOver- 00:07:05.820 [2024-11-27 15:06:31.107057] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.820 [2024-11-27 15:06:31.107084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.820 [2024-11-27 15:06:31.107205] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.820 [2024-11-27 15:06:31.107223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.820 #27 NEW cov: 12478 ft: 15628 corp: 26/74b lim: 5 exec/s: 27 rss: 75Mb L: 2/5 MS: 1 InsertByte- 00:07:05.820 [2024-11-27 15:06:31.157732] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.820 [2024-11-27 15:06:31.157760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.820 [2024-11-27 15:06:31.157885] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.820 [2024-11-27 15:06:31.157901] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.820 [2024-11-27 15:06:31.158017] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.820 [2024-11-27 15:06:31.158032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:05.820 [2024-11-27 15:06:31.158154] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.820 [2024-11-27 15:06:31.158170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:06.080 #28 NEW cov: 12478 ft: 15645 corp: 27/78b lim: 5 exec/s: 28 rss: 75Mb L: 4/5 MS: 1 ChangeBit- 00:07:06.080 [2024-11-27 15:06:31.207363] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.080 [2024-11-27 15:06:31.207391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.080 [2024-11-27 15:06:31.207514] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.080 [2024-11-27 15:06:31.207531] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.080 #29 NEW cov: 12478 ft: 15646 corp: 28/80b lim: 5 exec/s: 29 rss: 75Mb L: 2/5 MS: 1 InsertByte- 00:07:06.080 [2024-11-27 15:06:31.258255] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.080 [2024-11-27 15:06:31.258280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.080 [2024-11-27 15:06:31.258400] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.080 [2024-11-27 15:06:31.258418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.080 [2024-11-27 15:06:31.258537] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.080 [2024-11-27 15:06:31.258554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:06.080 [2024-11-27 15:06:31.258671] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.080 [2024-11-27 15:06:31.258688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:06.080 [2024-11-27 15:06:31.258803] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.080 [2024-11-27 
15:06:31.258820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:06.080 #30 NEW cov: 12478 ft: 15650 corp: 29/85b lim: 5 exec/s: 30 rss: 75Mb L: 5/5 MS: 1 CopyPart- 00:07:06.080 [2024-11-27 15:06:31.308495] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.080 [2024-11-27 15:06:31.308522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.080 [2024-11-27 15:06:31.308643] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.080 [2024-11-27 15:06:31.308661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.080 [2024-11-27 15:06:31.308772] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.080 [2024-11-27 15:06:31.308790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:06.080 [2024-11-27 15:06:31.308914] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.080 [2024-11-27 15:06:31.308933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:06.080 [2024-11-27 15:06:31.309055] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:06.080 [2024-11-27 15:06:31.309070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:06.080 #31 NEW cov: 12478 ft: 15659 corp: 30/90b lim: 5 exec/s: 15 rss: 75Mb L: 5/5 MS: 1 InsertByte- 00:07:06.080 #31 DONE cov: 12478 ft: 15659 corp: 30/90b lim: 5 exec/s: 15 rss: 75Mb 00:07:06.080 Done 31 runs in 2 second(s) 00:07:06.339 15:06:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_9.conf /var/tmp/suppress_nvmf_fuzz 00:07:06.339 15:06:31 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:06.340 15:06:31 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:06.340 15:06:31 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 10 1 0x1 00:07:06.340 15:06:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=10 00:07:06.340 15:06:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:06.340 15:06:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:06.340 15:06:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:07:06.340 15:06:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_10.conf 00:07:06.340 15:06:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:06.340 15:06:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local 
LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:06.340 15:06:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 10 00:07:06.340 15:06:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4410 00:07:06.340 15:06:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:07:06.340 15:06:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' 00:07:06.340 15:06:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4410"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:06.340 15:06:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:06.340 15:06:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:06.340 15:06:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' -c /tmp/fuzz_json_10.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 -Z 10 00:07:06.340 [2024-11-27 15:06:31.484469] Starting SPDK v25.01-pre git sha1 2e10c84c8 / DPDK 24.03.0 initialization... 00:07:06.340 [2024-11-27 15:06:31.484522] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2369762 ] 00:07:06.340 [2024-11-27 15:06:31.666296] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.599 [2024-11-27 15:06:31.700521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.599 [2024-11-27 15:06:31.759707] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:06.599 [2024-11-27 15:06:31.776017] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4410 *** 00:07:06.599 INFO: Running with entropic power schedule (0xFF, 100). 00:07:06.599 INFO: Seed: 3311676120 00:07:06.599 INFO: Loaded 1 modules (389699 inline 8-bit counters): 389699 [0x2c6ea8c, 0x2ccdccf), 00:07:06.599 INFO: Loaded 1 PC tables (389699 PCs): 389699 [0x2ccdcd0,0x32c0100), 00:07:06.599 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:07:06.599 INFO: A corpus is not provided, starting from an empty corpus 00:07:06.599 #2 INITED exec/s: 0 rss: 65Mb 00:07:06.599 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:06.599 This may also happen if the target rejected all inputs we tried so far 00:07:06.599 [2024-11-27 15:06:31.831587] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:260affff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.599 [2024-11-27 15:06:31.831621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.599 [2024-11-27 15:06:31.831698] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.599 [2024-11-27 15:06:31.831713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.858 NEW_FUNC[1/716]: 0x448a88 in fuzz_admin_security_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:205 00:07:06.858 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:06.858 #19 NEW cov: 12261 ft: 12272 corp: 2/24b lim: 40 exec/s: 0 rss: 73Mb L: 23/23 MS: 2 InsertByte-InsertRepeatedBytes- 00:07:06.858 [2024-11-27 15:06:32.152459] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:260affff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.858 [2024-11-27 15:06:32.152490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.858 [2024-11-27 15:06:32.152567] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.858 [2024-11-27 15:06:32.152581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.858 [2024-11-27 15:06:32.152646] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:82ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.858 [2024-11-27 15:06:32.152660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:06.858 #20 NEW cov: 12387 ft: 13208 corp: 3/48b lim: 40 exec/s: 0 rss: 74Mb L: 24/24 MS: 1 InsertByte- 00:07:07.118 [2024-11-27 15:06:32.212702] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:260affff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.118 [2024-11-27 15:06:32.212728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.118 [2024-11-27 15:06:32.212802] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.118 [2024-11-27 15:06:32.212817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.118 [2024-11-27 15:06:32.212875] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.118 [2024-11-27 15:06:32.212888] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:07.118 [2024-11-27 15:06:32.212945] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.118 [2024-11-27 15:06:32.212962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:07.118 #21 NEW cov: 12393 ft: 13819 corp: 4/84b lim: 40 exec/s: 0 rss: 74Mb L: 36/36 MS: 1 InsertRepeatedBytes- 00:07:07.118 [2024-11-27 15:06:32.252932] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:260affff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.118 [2024-11-27 15:06:32.252960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.118 [2024-11-27 15:06:32.253018] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.118 [2024-11-27 15:06:32.253032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.119 [2024-11-27 15:06:32.253089] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:28282828 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.119 [2024-11-27 15:06:32.253103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:07.119 [2024-11-27 15:06:32.253160] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.119 [2024-11-27 15:06:32.253173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:07.119 [2024-11-27 15:06:32.253229] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.119 [2024-11-27 15:06:32.253242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:07.119 #22 NEW cov: 12478 ft: 14091 corp: 5/124b lim: 40 exec/s: 0 rss: 74Mb L: 40/40 MS: 1 InsertRepeatedBytes- 00:07:07.119 [2024-11-27 15:06:32.313013] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:260affff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.119 [2024-11-27 15:06:32.313039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.119 [2024-11-27 15:06:32.313112] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.119 [2024-11-27 15:06:32.313126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.119 [2024-11-27 15:06:32.313185] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:07:07.119 [2024-11-27 15:06:32.313198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:07.119 [2024-11-27 15:06:32.313258] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.119 [2024-11-27 15:06:32.313272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:07.119 #23 NEW cov: 12478 ft: 14239 corp: 6/160b lim: 40 exec/s: 0 rss: 74Mb L: 36/40 MS: 1 ShuffleBytes- 00:07:07.119 [2024-11-27 15:06:32.353041] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:260affff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.119 [2024-11-27 15:06:32.353066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.119 [2024-11-27 15:06:32.353140] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.119 [2024-11-27 15:06:32.353158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.119 [2024-11-27 15:06:32.353214] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.119 [2024-11-27 15:06:32.353228] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:07.119 [2024-11-27 15:06:32.353287] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:24ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.119 [2024-11-27 15:06:32.353300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:07.119 #24 NEW cov: 12478 ft: 14345 corp: 7/196b lim: 40 exec/s: 0 rss: 74Mb L: 36/40 MS: 1 ChangeBinInt- 00:07:07.119 [2024-11-27 15:06:32.393048] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:260affff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.119 [2024-11-27 15:06:32.393073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.119 [2024-11-27 15:06:32.393129] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff82ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.119 [2024-11-27 15:06:32.393143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.119 [2024-11-27 15:06:32.393199] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:82ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.119 [2024-11-27 15:06:32.393211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:07.119 #25 NEW cov: 12478 ft: 14379 corp: 8/220b lim: 40 exec/s: 0 rss: 74Mb L: 24/40 MS: 1 CopyPart- 00:07:07.119 [2024-11-27 
15:06:32.453372] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:260affff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.119 [2024-11-27 15:06:32.453398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.119 [2024-11-27 15:06:32.453458] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.119 [2024-11-27 15:06:32.453472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.119 [2024-11-27 15:06:32.453532] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.119 [2024-11-27 15:06:32.453545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:07.119 [2024-11-27 15:06:32.453603] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.119 [2024-11-27 15:06:32.453617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:07.379 #26 NEW cov: 12478 ft: 14416 corp: 9/256b lim: 40 exec/s: 0 rss: 74Mb L: 36/40 MS: 1 ShuffleBytes- 00:07:07.379 [2024-11-27 15:06:32.493473] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:260affff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.379 [2024-11-27 15:06:32.493499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.379 [2024-11-27 15:06:32.493559] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.379 [2024-11-27 15:06:32.493573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.379 [2024-11-27 15:06:32.493632] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.379 [2024-11-27 15:06:32.493646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:07.379 [2024-11-27 15:06:32.493703] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:fffffff5 cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.379 [2024-11-27 15:06:32.493716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:07.379 #27 NEW cov: 12478 ft: 14450 corp: 10/292b lim: 40 exec/s: 0 rss: 74Mb L: 36/40 MS: 1 ChangeBinInt- 00:07:07.379 [2024-11-27 15:06:32.533181] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ff000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.379 [2024-11-27 15:06:32.533207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 
cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.379 #35 NEW cov: 12478 ft: 14871 corp: 11/304b lim: 40 exec/s: 0 rss: 74Mb L: 12/40 MS: 3 CrossOver-CopyPart-InsertRepeatedBytes- 00:07:07.379 [2024-11-27 15:06:32.573816] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:260affff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.379 [2024-11-27 15:06:32.573842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.379 [2024-11-27 15:06:32.573900] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:82ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.379 [2024-11-27 15:06:32.573914] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.379 [2024-11-27 15:06:32.573972] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.379 [2024-11-27 15:06:32.573986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:07.379 [2024-11-27 15:06:32.574043] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.379 [2024-11-27 15:06:32.574057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:07.379 [2024-11-27 15:06:32.574114] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.379 [2024-11-27 15:06:32.574128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:07.379 #36 NEW cov: 12478 ft: 14898 corp: 12/344b lim: 40 exec/s: 0 rss: 74Mb L: 40/40 MS: 1 CrossOver- 00:07:07.379 [2024-11-27 15:06:32.613769] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:260affff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.379 [2024-11-27 15:06:32.613794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.379 [2024-11-27 15:06:32.613869] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.379 [2024-11-27 15:06:32.613883] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.379 [2024-11-27 15:06:32.613945] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.379 [2024-11-27 15:06:32.613959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:07.379 [2024-11-27 15:06:32.614017] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.379 [2024-11-27 15:06:32.614030] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:07.379 #37 NEW cov: 12478 ft: 14904 corp: 13/380b lim: 40 exec/s: 0 rss: 74Mb L: 36/40 MS: 1 ShuffleBytes- 00:07:07.379 [2024-11-27 15:06:32.653528] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:260a0200 cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.379 [2024-11-27 15:06:32.653553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.379 #40 NEW cov: 12478 ft: 14956 corp: 14/389b lim: 40 exec/s: 0 rss: 74Mb L: 9/40 MS: 3 CrossOver-CrossOver-CMP- DE: "\002\000"- 00:07:07.379 [2024-11-27 15:06:32.713724] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:270a0200 cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.379 [2024-11-27 15:06:32.713750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.639 NEW_FUNC[1/1]: 0x1c47d08 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:07.639 #41 NEW cov: 12501 ft: 15033 corp: 15/398b lim: 40 exec/s: 0 rss: 74Mb L: 9/40 MS: 1 ChangeBit- 00:07:07.639 [2024-11-27 15:06:32.773982] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:260a0200 cdw11:ffff0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.639 [2024-11-27 15:06:32.774007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.639 [2024-11-27 15:06:32.774084] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.639 [2024-11-27 15:06:32.774098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.639 #42 NEW cov: 12501 ft: 15085 corp: 16/417b lim: 40 exec/s: 0 rss: 74Mb L: 19/40 MS: 1 CrossOver- 00:07:07.639 [2024-11-27 15:06:32.814459] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:260affff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.639 [2024-11-27 15:06:32.814484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.639 [2024-11-27 15:06:32.814560] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.639 [2024-11-27 15:06:32.814574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.639 [2024-11-27 15:06:32.814637] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:28282828 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.639 [2024-11-27 15:06:32.814651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:07.639 [2024-11-27 15:06:32.814721] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0100 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:07:07.639 [2024-11-27 15:06:32.814734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:07.639 [2024-11-27 15:06:32.814798] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:0000ffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.639 [2024-11-27 15:06:32.814812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:07.639 #43 NEW cov: 12501 ft: 15145 corp: 17/457b lim: 40 exec/s: 43 rss: 74Mb L: 40/40 MS: 1 ChangeBinInt- 00:07:07.639 [2024-11-27 15:06:32.874179] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:26260aff cdw11:0affffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.639 [2024-11-27 15:06:32.874205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.639 #44 NEW cov: 12501 ft: 15171 corp: 18/465b lim: 40 exec/s: 44 rss: 74Mb L: 8/40 MS: 1 CrossOver- 00:07:07.639 [2024-11-27 15:06:32.914219] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:270a0300 cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.639 [2024-11-27 15:06:32.914246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.639 #45 NEW cov: 12501 ft: 15195 corp: 19/474b lim: 40 exec/s: 45 rss: 74Mb L: 9/40 MS: 1 ChangeBit- 00:07:07.639 [2024-11-27 15:06:32.974741] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:260a0000 cdw11:00ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.639 [2024-11-27 15:06:32.974766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.639 [2024-11-27 15:06:32.974825] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.639 [2024-11-27 15:06:32.974839] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.639 [2024-11-27 15:06:32.974898] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.639 [2024-11-27 15:06:32.974912] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:07.898 #46 NEW cov: 12501 ft: 15238 corp: 20/500b lim: 40 exec/s: 46 rss: 74Mb L: 26/40 MS: 1 InsertRepeatedBytes- 00:07:07.898 [2024-11-27 15:06:33.015056] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:260affff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.898 [2024-11-27 15:06:33.015081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.898 [2024-11-27 15:06:33.015157] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:82ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.898 [2024-11-27 15:06:33.015171] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.898 [2024-11-27 15:06:33.015228] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.898 [2024-11-27 15:06:33.015242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:07.898 [2024-11-27 15:06:33.015302] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.898 [2024-11-27 15:06:33.015315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:07.898 [2024-11-27 15:06:33.015376] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.898 [2024-11-27 15:06:33.015393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:07.898 #47 NEW cov: 12501 ft: 15258 corp: 21/540b lim: 40 exec/s: 47 rss: 74Mb L: 40/40 MS: 1 ShuffleBytes- 00:07:07.898 [2024-11-27 15:06:33.074741] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:270a0200 cdw11:00ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.898 [2024-11-27 15:06:33.074766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.898 #48 NEW cov: 12501 ft: 15285 corp: 22/549b lim: 40 exec/s: 48 rss: 75Mb L: 9/40 MS: 1 CopyPart- 00:07:07.898 [2024-11-27 15:06:33.114812] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:270a3000 cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.898 [2024-11-27 15:06:33.114838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.898 #49 NEW cov: 12501 ft: 15293 corp: 23/558b lim: 40 exec/s: 49 rss: 75Mb L: 9/40 MS: 1 ChangeByte- 00:07:07.898 [2024-11-27 15:06:33.175376] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.898 [2024-11-27 15:06:33.175401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.899 [2024-11-27 15:06:33.175460] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.899 [2024-11-27 15:06:33.175473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.899 [2024-11-27 15:06:33.175533] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.899 [2024-11-27 15:06:33.175546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:07.899 [2024-11-27 15:06:33.175607] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.899 [2024-11-27 15:06:33.175620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:07.899 #50 NEW cov: 12501 ft: 15298 corp: 24/591b lim: 40 exec/s: 50 rss: 75Mb L: 33/40 MS: 1 EraseBytes- 00:07:07.899 [2024-11-27 15:06:33.235553] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:260affff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.899 [2024-11-27 15:06:33.235578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.899 [2024-11-27 15:06:33.235638] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffff01 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.899 [2024-11-27 15:06:33.235652] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.899 [2024-11-27 15:06:33.235712] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:000002ff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.899 [2024-11-27 15:06:33.235726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:07.899 [2024-11-27 15:06:33.235784] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:24ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.899 [2024-11-27 15:06:33.235798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:08.159 [2024-11-27 15:06:33.275665] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:260affff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.159 [2024-11-27 15:06:33.275690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.159 [2024-11-27 15:06:33.275750] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffff01 cdw11:00000100 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.159 [2024-11-27 15:06:33.275764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.159 [2024-11-27 15:06:33.275821] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:7fbf5010 cdw11:88edffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.159 [2024-11-27 15:06:33.275835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:08.159 [2024-11-27 15:06:33.275892] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:24ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.159 [2024-11-27 15:06:33.275905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:08.159 #52 NEW cov: 12501 ft: 15301 corp: 25/627b lim: 40 exec/s: 52 rss: 75Mb L: 36/40 MS: 2 CMP-CMP- DE: 
"\001\000\000\000\000\000\000\002"-"\001\000\177\277P\020\210\355"- 00:07:08.159 [2024-11-27 15:06:33.315746] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:260affff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.159 [2024-11-27 15:06:33.315770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.159 [2024-11-27 15:06:33.315847] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.159 [2024-11-27 15:06:33.315861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.159 [2024-11-27 15:06:33.315918] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:8cffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.159 [2024-11-27 15:06:33.315931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:08.159 [2024-11-27 15:06:33.315989] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.159 [2024-11-27 15:06:33.316003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:08.159 #53 NEW cov: 12501 ft: 15318 corp: 26/663b lim: 40 exec/s: 53 rss: 75Mb L: 36/40 MS: 1 ChangeByte- 00:07:08.159 [2024-11-27 15:06:33.375720] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:260a0200 cdw11:ffff0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.159 [2024-11-27 15:06:33.375745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.159 [2024-11-27 15:06:33.375818] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000021 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.159 [2024-11-27 15:06:33.375831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.159 #54 NEW cov: 12501 ft: 15330 corp: 27/682b lim: 40 exec/s: 54 rss: 75Mb L: 19/40 MS: 1 ChangeByte- 00:07:08.159 [2024-11-27 15:06:33.436110] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:260affff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.159 [2024-11-27 15:06:33.436137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.159 [2024-11-27 15:06:33.436214] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.159 [2024-11-27 15:06:33.436228] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.159 [2024-11-27 15:06:33.436288] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:8cffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.159 [2024-11-27 15:06:33.436301] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:08.159 [2024-11-27 15:06:33.436357] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.159 [2024-11-27 15:06:33.436371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:08.159 #55 NEW cov: 12501 ft: 15340 corp: 28/718b lim: 40 exec/s: 55 rss: 75Mb L: 36/40 MS: 1 ShuffleBytes- 00:07:08.159 [2024-11-27 15:06:33.496313] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:260affff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.159 [2024-11-27 15:06:33.496338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.159 [2024-11-27 15:06:33.496399] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ff01ff01 cdw11:00000100 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.159 [2024-11-27 15:06:33.496413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.159 [2024-11-27 15:06:33.496472] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:7fbf5010 cdw11:88edffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.159 [2024-11-27 15:06:33.496486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:08.159 [2024-11-27 15:06:33.496546] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:24ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.159 [2024-11-27 15:06:33.496559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:08.419 #56 NEW cov: 12501 ft: 15355 corp: 29/754b lim: 40 exec/s: 56 rss: 75Mb L: 36/40 MS: 1 ChangeBinInt- 00:07:08.419 [2024-11-27 15:06:33.556572] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:260affff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.419 [2024-11-27 15:06:33.556600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.419 [2024-11-27 15:06:33.556687] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:82ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.419 [2024-11-27 15:06:33.556701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.419 [2024-11-27 15:06:33.556760] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.419 [2024-11-27 15:06:33.556773] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:08.419 [2024-11-27 15:06:33.556832] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:07:08.419 [2024-11-27 15:06:33.556845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:08.419 [2024-11-27 15:06:33.556908] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:ffffdfff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.419 [2024-11-27 15:06:33.556921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:08.419 #57 NEW cov: 12501 ft: 15360 corp: 30/794b lim: 40 exec/s: 57 rss: 75Mb L: 40/40 MS: 1 ChangeBit- 00:07:08.419 [2024-11-27 15:06:33.596448] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:260affff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.419 [2024-11-27 15:06:33.596474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.419 [2024-11-27 15:06:33.596534] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.419 [2024-11-27 15:06:33.596548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.419 [2024-11-27 15:06:33.596607] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:82ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.419 [2024-11-27 15:06:33.596620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:08.419 #58 NEW cov: 12501 ft: 15367 corp: 31/818b lim: 40 exec/s: 58 rss: 75Mb L: 24/40 MS: 1 ShuffleBytes- 00:07:08.419 [2024-11-27 15:06:33.636802] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:260affff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.419 [2024-11-27 15:06:33.636826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.419 [2024-11-27 15:06:33.636904] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.419 [2024-11-27 15:06:33.636919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.419 [2024-11-27 15:06:33.636981] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:28282828 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.419 [2024-11-27 15:06:33.636994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:08.419 [2024-11-27 15:06:33.637054] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.419 [2024-11-27 15:06:33.637067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:08.419 [2024-11-27 15:06:33.637128] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 
nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.419 [2024-11-27 15:06:33.637142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:08.419 #59 NEW cov: 12501 ft: 15373 corp: 32/858b lim: 40 exec/s: 59 rss: 75Mb L: 40/40 MS: 1 ShuffleBytes- 00:07:08.419 [2024-11-27 15:06:33.676843] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:260affff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.419 [2024-11-27 15:06:33.676867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.419 [2024-11-27 15:06:33.676943] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.419 [2024-11-27 15:06:33.676960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.419 [2024-11-27 15:06:33.677020] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.419 [2024-11-27 15:06:33.677034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:08.419 [2024-11-27 15:06:33.677092] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.419 [2024-11-27 15:06:33.677105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:08.419 #60 NEW cov: 12501 ft: 15405 corp: 33/897b lim: 40 exec/s: 60 rss: 75Mb L: 39/40 MS: 1 InsertRepeatedBytes- 00:07:08.419 [2024-11-27 15:06:33.716724] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:260affff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.419 [2024-11-27 15:06:33.716749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.419 [2024-11-27 15:06:33.716825] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.419 [2024-11-27 15:06:33.716840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.419 [2024-11-27 15:06:33.716902] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:82ff0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.419 [2024-11-27 15:06:33.716916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:08.419 #61 NEW cov: 12501 ft: 15415 corp: 34/925b lim: 40 exec/s: 61 rss: 75Mb L: 28/40 MS: 1 InsertRepeatedBytes- 00:07:08.419 [2024-11-27 15:06:33.757046] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:260affff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.419 [2024-11-27 15:06:33.757072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE 
(00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.419 [2024-11-27 15:06:33.757135] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:fffff6ff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.419 [2024-11-27 15:06:33.757149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.419 [2024-11-27 15:06:33.757209] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.419 [2024-11-27 15:06:33.757223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:08.419 [2024-11-27 15:06:33.757281] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:fffffff5 cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.419 [2024-11-27 15:06:33.757294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:08.678 #62 NEW cov: 12501 ft: 15472 corp: 35/961b lim: 40 exec/s: 62 rss: 75Mb L: 36/40 MS: 1 ChangeBinInt- 00:07:08.678 [2024-11-27 15:06:33.816856] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:270a03ff cdw11:00ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.678 [2024-11-27 15:06:33.816883] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.678 #63 NEW cov: 12501 ft: 15535 corp: 36/970b lim: 40 exec/s: 31 rss: 75Mb L: 9/40 MS: 1 ShuffleBytes- 00:07:08.678 #63 DONE cov: 12501 ft: 15535 corp: 36/970b lim: 40 exec/s: 31 rss: 75Mb 00:07:08.678 ###### Recommended dictionary. ###### 00:07:08.678 "\002\000" # Uses: 0 00:07:08.678 "\001\000\000\000\000\000\000\002" # Uses: 0 00:07:08.678 "\001\000\177\277P\020\210\355" # Uses: 0 00:07:08.678 ###### End of recommended dictionary. 
###### 00:07:08.678 Done 63 runs in 2 second(s) 00:07:08.678 15:06:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_10.conf /var/tmp/suppress_nvmf_fuzz 00:07:08.678 15:06:33 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:08.678 15:06:33 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:08.678 15:06:33 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 11 1 0x1 00:07:08.678 15:06:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=11 00:07:08.679 15:06:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:08.679 15:06:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:08.679 15:06:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:07:08.679 15:06:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_11.conf 00:07:08.679 15:06:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:08.679 15:06:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:08.679 15:06:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 11 00:07:08.679 15:06:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4411 00:07:08.679 15:06:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:07:08.679 15:06:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' 00:07:08.679 15:06:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4411"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:08.679 15:06:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:08.679 15:06:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:08.679 15:06:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' -c /tmp/fuzz_json_11.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 -Z 11 00:07:08.679 [2024-11-27 15:06:33.990964] Starting SPDK v25.01-pre git sha1 2e10c84c8 / DPDK 24.03.0 initialization... 
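The nvmf/run.sh trace above shows the per-run setup for fuzzer type 11: the listening port is derived from the fuzzer type (printf %02d 11, giving port 4411), the reference fuzz_json.conf has its trsvcid rewritten with sed, two leak suppressions are emitted for LSAN, and llvm_nvme_fuzz is launched with the traced flags against a freshly created corpus directory. A minimal standalone sketch of those same steps follows; SPDK_DIR and OUT_DIR are placeholder names for the Jenkins workspace and spdk/../output paths, and the redirections into the config and suppression files are assumptions (the trace only shows the sed and echo commands themselves).

    # Sketch only: mirrors the traced commands for fuzzer_type=11; paths and redirections are assumptions.
    fuzzer_type=11
    port="44$(printf %02d "$fuzzer_type")"                    # printf %02d 11 -> port 4411, as in the trace
    corpus_dir="$SPDK_DIR/../corpus/llvm_nvmf_$(printf %02d "$fuzzer_type")"
    nvmf_cfg="/tmp/fuzz_json_${fuzzer_type}.conf"
    suppress_file=/var/tmp/suppress_nvmf_fuzz
    trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port"

    mkdir -p "$corpus_dir"
    # the trace implies the rewritten config ends up in $nvmf_cfg (redirection not shown in the log)
    sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
        "$SPDK_DIR/test/fuzz/llvm/nvmf/fuzz_json.conf" > "$nvmf_cfg"
    # likewise assumed to be collected into the LSAN suppression file referenced by LSAN_OPTIONS
    echo leak:spdk_nvmf_qpair_disconnect  > "$suppress_file"
    echo leak:nvmf_ctrlr_create          >> "$suppress_file"

    LSAN_OPTIONS="report_objects=1:suppressions=$suppress_file:print_suppressions=0" \
      "$SPDK_DIR/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" \
        -m 0x1 -s 512 -P "$OUT_DIR/llvm/" -F "$trid" \
        -c "$nvmf_cfg" -t 1 -D "$corpus_dir" -Z "$fuzzer_type"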
00:07:08.679 [2024-11-27 15:06:33.991052] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2370144 ] 00:07:08.938 [2024-11-27 15:06:34.187127] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.938 [2024-11-27 15:06:34.220894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.196 [2024-11-27 15:06:34.280711] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:09.196 [2024-11-27 15:06:34.297061] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4411 *** 00:07:09.196 INFO: Running with entropic power schedule (0xFF, 100). 00:07:09.196 INFO: Seed: 1536712749 00:07:09.196 INFO: Loaded 1 modules (389699 inline 8-bit counters): 389699 [0x2c6ea8c, 0x2ccdccf), 00:07:09.196 INFO: Loaded 1 PC tables (389699 PCs): 389699 [0x2ccdcd0,0x32c0100), 00:07:09.196 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:07:09.196 INFO: A corpus is not provided, starting from an empty corpus 00:07:09.196 #2 INITED exec/s: 0 rss: 65Mb 00:07:09.196 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:09.196 This may also happen if the target rejected all inputs we tried so far 00:07:09.196 [2024-11-27 15:06:34.342394] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.196 [2024-11-27 15:06:34.342427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.455 NEW_FUNC[1/717]: 0x44a7f8 in fuzz_admin_security_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:223 00:07:09.455 NEW_FUNC[2/717]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:09.455 #3 NEW cov: 12286 ft: 12244 corp: 2/11b lim: 40 exec/s: 0 rss: 73Mb L: 10/10 MS: 1 InsertRepeatedBytes- 00:07:09.455 [2024-11-27 15:06:34.673579] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0ab4b4b4 cdw11:b4b4b4b4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.455 [2024-11-27 15:06:34.673624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.455 [2024-11-27 15:06:34.673690] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:b4b4b4b4 cdw11:b4b4b4b4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.455 [2024-11-27 15:06:34.673708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.456 [2024-11-27 15:06:34.673772] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:b4b4b4b4 cdw11:b4b4b4b4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.456 [2024-11-27 15:06:34.673791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:09.456 #4 NEW cov: 12399 ft: 13579 corp: 3/37b lim: 40 exec/s: 0 rss: 73Mb L: 26/26 MS: 1 InsertRepeatedBytes- 00:07:09.456 [2024-11-27 
15:06:34.713222] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffff2e cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.456 [2024-11-27 15:06:34.713247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.456 #5 NEW cov: 12405 ft: 13861 corp: 4/48b lim: 40 exec/s: 0 rss: 73Mb L: 11/26 MS: 1 InsertByte- 00:07:09.456 [2024-11-27 15:06:34.773402] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffff2e cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.456 [2024-11-27 15:06:34.773427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.714 #6 NEW cov: 12490 ft: 14100 corp: 5/59b lim: 40 exec/s: 0 rss: 73Mb L: 11/26 MS: 1 ChangeBinInt- 00:07:09.714 [2024-11-27 15:06:34.833568] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.714 [2024-11-27 15:06:34.833593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.714 #7 NEW cov: 12490 ft: 14191 corp: 6/69b lim: 40 exec/s: 0 rss: 73Mb L: 10/26 MS: 1 ChangeBit- 00:07:09.714 [2024-11-27 15:06:34.873674] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.714 [2024-11-27 15:06:34.873699] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.714 #8 NEW cov: 12490 ft: 14247 corp: 7/79b lim: 40 exec/s: 0 rss: 73Mb L: 10/26 MS: 1 ShuffleBytes- 00:07:09.714 [2024-11-27 15:06:34.933841] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffff2e cdw11:ffff0100 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.714 [2024-11-27 15:06:34.933865] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.714 #9 NEW cov: 12490 ft: 14309 corp: 8/90b lim: 40 exec/s: 0 rss: 73Mb L: 11/26 MS: 1 ChangeBinInt- 00:07:09.714 [2024-11-27 15:06:34.974239] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:05050505 cdw11:05050505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.714 [2024-11-27 15:06:34.974267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.714 [2024-11-27 15:06:34.974322] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:05050505 cdw11:05050505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.714 [2024-11-27 15:06:34.974335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.714 [2024-11-27 15:06:34.974390] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:05050505 cdw11:05050505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.714 [2024-11-27 15:06:34.974403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:09.714 #14 NEW cov: 12490 ft: 14351 corp: 9/115b lim: 40 
exec/s: 0 rss: 73Mb L: 25/26 MS: 5 ChangeByte-ShuffleBytes-CopyPart-ChangeByte-InsertRepeatedBytes- 00:07:09.714 [2024-11-27 15:06:35.014350] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:2a0a0000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.714 [2024-11-27 15:06:35.014375] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.714 [2024-11-27 15:06:35.014430] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.714 [2024-11-27 15:06:35.014444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.714 [2024-11-27 15:06:35.014497] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:0000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.714 [2024-11-27 15:06:35.014510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:09.714 #24 NEW cov: 12490 ft: 14368 corp: 10/139b lim: 40 exec/s: 0 rss: 73Mb L: 24/26 MS: 5 CrossOver-CrossOver-InsertByte-EraseBytes-InsertRepeatedBytes- 00:07:09.973 [2024-11-27 15:06:35.054186] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffff2e cdw11:ffff0100 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.973 [2024-11-27 15:06:35.054212] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.974 #25 NEW cov: 12490 ft: 14400 corp: 11/149b lim: 40 exec/s: 0 rss: 73Mb L: 10/26 MS: 1 EraseBytes- 00:07:09.974 [2024-11-27 15:06:35.114520] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffff2e cdw11:ffff0100 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.974 [2024-11-27 15:06:35.114545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.974 [2024-11-27 15:06:35.114603] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000aff cdw11:ffff2eff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.974 [2024-11-27 15:06:35.114617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.974 #26 NEW cov: 12490 ft: 14645 corp: 12/171b lim: 40 exec/s: 0 rss: 73Mb L: 22/26 MS: 1 CopyPart- 00:07:09.974 [2024-11-27 15:06:35.154619] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0ab4b4b4 cdw11:b4b4b4b4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.974 [2024-11-27 15:06:35.154644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.974 [2024-11-27 15:06:35.154700] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:b4b4b4b4 cdw11:b4b4b4b4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.974 [2024-11-27 15:06:35.154715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.974 #27 NEW cov: 12490 ft: 14739 corp: 13/193b lim: 40 exec/s: 0 rss: 74Mb L: 22/26 MS: 1 
EraseBytes- 00:07:09.974 [2024-11-27 15:06:35.214818] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ff02002e cdw11:ffff0100 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.974 [2024-11-27 15:06:35.214843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.974 [2024-11-27 15:06:35.214900] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000aff cdw11:ffff2eff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.974 [2024-11-27 15:06:35.214914] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.974 NEW_FUNC[1/1]: 0x1c47d08 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:09.974 #28 NEW cov: 12513 ft: 14819 corp: 14/215b lim: 40 exec/s: 0 rss: 74Mb L: 22/26 MS: 1 ChangeBinInt- 00:07:09.974 [2024-11-27 15:06:35.275138] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:2a0a00ff cdw11:ff010000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.974 [2024-11-27 15:06:35.275162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.974 [2024-11-27 15:06:35.275219] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:000affff cdw11:ff2effff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.974 [2024-11-27 15:06:35.275232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.974 [2024-11-27 15:06:35.275287] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:01000000 cdw11:0000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.974 [2024-11-27 15:06:35.275301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:10.233 #29 NEW cov: 12513 ft: 14838 corp: 15/239b lim: 40 exec/s: 0 rss: 74Mb L: 24/26 MS: 1 CrossOver- 00:07:10.233 [2024-11-27 15:06:35.335118] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ff02ffff cdw11:2e000100 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.233 [2024-11-27 15:06:35.335143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.233 [2024-11-27 15:06:35.335200] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000aff cdw11:ffff2eff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.233 [2024-11-27 15:06:35.335213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.233 #30 NEW cov: 12513 ft: 14876 corp: 16/261b lim: 40 exec/s: 30 rss: 74Mb L: 22/26 MS: 1 ShuffleBytes- 00:07:10.233 [2024-11-27 15:06:35.395097] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffff01 cdw11:000003ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.233 [2024-11-27 15:06:35.395124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.233 #31 NEW cov: 12513 ft: 14893 corp: 17/271b lim: 40 exec/s: 31 rss: 74Mb L: 10/26 MS: 1 
ChangeBinInt- 00:07:10.233 [2024-11-27 15:06:35.455457] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffff2e cdw11:ffff0100 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.233 [2024-11-27 15:06:35.455482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.233 [2024-11-27 15:06:35.455540] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000a28 cdw11:ffff2eff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.233 [2024-11-27 15:06:35.455557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.233 #37 NEW cov: 12513 ft: 14911 corp: 18/293b lim: 40 exec/s: 37 rss: 74Mb L: 22/26 MS: 1 ChangeByte- 00:07:10.233 [2024-11-27 15:06:35.495375] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffb3 cdw11:2effff01 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.233 [2024-11-27 15:06:35.495401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.233 #38 NEW cov: 12513 ft: 14962 corp: 19/305b lim: 40 exec/s: 38 rss: 74Mb L: 12/26 MS: 1 InsertByte- 00:07:10.233 [2024-11-27 15:06:35.535697] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ff02002e cdw11:ffff0100 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.234 [2024-11-27 15:06:35.535722] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.234 [2024-11-27 15:06:35.535781] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00160aff cdw11:ffff2eff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.234 [2024-11-27 15:06:35.535794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.234 #39 NEW cov: 12513 ft: 14989 corp: 20/327b lim: 40 exec/s: 39 rss: 74Mb L: 22/26 MS: 1 ChangeBinInt- 00:07:10.493 [2024-11-27 15:06:35.575628] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffff2e cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.493 [2024-11-27 15:06:35.575654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.493 #40 NEW cov: 12513 ft: 15077 corp: 21/342b lim: 40 exec/s: 40 rss: 74Mb L: 15/26 MS: 1 CopyPart- 00:07:10.493 [2024-11-27 15:06:35.635805] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffff2e cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.493 [2024-11-27 15:06:35.635831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.493 #41 NEW cov: 12513 ft: 15093 corp: 22/357b lim: 40 exec/s: 41 rss: 74Mb L: 15/26 MS: 1 ChangeBinInt- 00:07:10.493 [2024-11-27 15:06:35.696170] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ff02002e cdw11:ffff0100 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.493 [2024-11-27 15:06:35.696195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:07:10.493 [2024-11-27 15:06:35.696249] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000a01 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.493 [2024-11-27 15:06:35.696263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.493 #42 NEW cov: 12513 ft: 15111 corp: 23/379b lim: 40 exec/s: 42 rss: 74Mb L: 22/26 MS: 1 CMP- DE: "\001\000\000\000\000\000\000\000"- 00:07:10.493 [2024-11-27 15:06:35.736066] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ff02160a cdw11:ffffff2e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.493 [2024-11-27 15:06:35.736091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.493 #43 NEW cov: 12513 ft: 15117 corp: 24/394b lim: 40 exec/s: 43 rss: 75Mb L: 15/26 MS: 1 EraseBytes- 00:07:10.493 [2024-11-27 15:06:35.796211] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:df02160a cdw11:ffffff2e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.493 [2024-11-27 15:06:35.796236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.752 #44 NEW cov: 12513 ft: 15126 corp: 25/409b lim: 40 exec/s: 44 rss: 75Mb L: 15/26 MS: 1 ChangeBit- 00:07:10.752 [2024-11-27 15:06:35.856513] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffff01 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.752 [2024-11-27 15:06:35.856538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.752 [2024-11-27 15:06:35.856592] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:0000002e cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.752 [2024-11-27 15:06:35.856613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.752 #45 NEW cov: 12513 ft: 15138 corp: 26/432b lim: 40 exec/s: 45 rss: 75Mb L: 23/26 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\000"- 00:07:10.752 [2024-11-27 15:06:35.896786] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ff02160a cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.752 [2024-11-27 15:06:35.896811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.752 [2024-11-27 15:06:35.896864] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffff7f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.752 [2024-11-27 15:06:35.896878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.752 [2024-11-27 15:06:35.896931] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:2effff01 cdw11:0000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.752 [2024-11-27 15:06:35.896945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:10.752 #46 NEW cov: 12513 ft: 15143 corp: 27/456b lim: 40 
exec/s: 46 rss: 75Mb L: 24/26 MS: 1 CrossOver- 00:07:10.752 [2024-11-27 15:06:35.936727] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ff02002e cdw11:ffff0100 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.752 [2024-11-27 15:06:35.936751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.752 [2024-11-27 15:06:35.936805] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000a01 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.752 [2024-11-27 15:06:35.936819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.752 #47 NEW cov: 12513 ft: 15152 corp: 28/478b lim: 40 exec/s: 47 rss: 75Mb L: 22/26 MS: 1 ChangeByte- 00:07:10.752 [2024-11-27 15:06:35.996753] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffff2e cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.752 [2024-11-27 15:06:35.996777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.752 #48 NEW cov: 12513 ft: 15174 corp: 29/490b lim: 40 exec/s: 48 rss: 75Mb L: 12/26 MS: 1 InsertByte- 00:07:10.752 [2024-11-27 15:06:36.036877] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffff2e cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.752 [2024-11-27 15:06:36.036901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.752 #49 NEW cov: 12513 ft: 15175 corp: 30/502b lim: 40 exec/s: 49 rss: 75Mb L: 12/26 MS: 1 ChangeBit- 00:07:11.012 [2024-11-27 15:06:36.097495] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ff02160a cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.012 [2024-11-27 15:06:36.097520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:11.012 [2024-11-27 15:06:36.097577] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffff7f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.012 [2024-11-27 15:06:36.097594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:11.012 [2024-11-27 15:06:36.097652] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:2eff0192 cdw11:759f21c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.012 [2024-11-27 15:06:36.097665] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:11.012 [2024-11-27 15:06:36.097719] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:2a02ff01 cdw11:0000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.012 [2024-11-27 15:06:36.097733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:11.012 #50 NEW cov: 12513 ft: 15510 corp: 31/534b lim: 40 exec/s: 50 rss: 75Mb L: 32/32 MS: 1 CMP- DE: "\001\222u\237!\301*\002"- 00:07:11.012 [2024-11-27 15:06:36.157341] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.012 [2024-11-27 15:06:36.157365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:11.012 [2024-11-27 15:06:36.157421] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:2effffff cdw11:ffff03ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.012 [2024-11-27 15:06:36.157435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:11.012 #51 NEW cov: 12513 ft: 15523 corp: 32/550b lim: 40 exec/s: 51 rss: 75Mb L: 16/32 MS: 1 CrossOver- 00:07:11.012 [2024-11-27 15:06:36.217373] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ff2a0192 cdw11:759f21c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.012 [2024-11-27 15:06:36.217397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:11.012 #53 NEW cov: 12513 ft: 15528 corp: 33/560b lim: 40 exec/s: 53 rss: 75Mb L: 10/32 MS: 2 CrossOver-PersAutoDict- DE: "\001\222u\237!\301*\002"- 00:07:11.012 [2024-11-27 15:06:36.277536] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffff2e cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.012 [2024-11-27 15:06:36.277561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:11.012 #59 NEW cov: 12513 ft: 15543 corp: 34/571b lim: 40 exec/s: 59 rss: 75Mb L: 11/32 MS: 1 CrossOver- 00:07:11.012 [2024-11-27 15:06:36.317682] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:fe2a0192 cdw11:759f21c1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.012 [2024-11-27 15:06:36.317706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:11.272 [2024-11-27 15:06:36.377831] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:fe2a0192 cdw11:0108759f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.272 [2024-11-27 15:06:36.377855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:11.272 #61 NEW cov: 12513 ft: 15594 corp: 35/583b lim: 40 exec/s: 30 rss: 75Mb L: 12/32 MS: 2 ChangeBit-CMP- DE: "\001\010"- 00:07:11.272 #61 DONE cov: 12513 ft: 15594 corp: 35/583b lim: 40 exec/s: 30 rss: 75Mb 00:07:11.272 ###### Recommended dictionary. ###### 00:07:11.272 "\001\000\000\000\000\000\000\000" # Uses: 1 00:07:11.272 "\001\222u\237!\301*\002" # Uses: 1 00:07:11.272 "\001\010" # Uses: 0 00:07:11.272 ###### End of recommended dictionary. 
###### 00:07:11.272 Done 61 runs in 2 second(s) 00:07:11.272 15:06:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_11.conf /var/tmp/suppress_nvmf_fuzz 00:07:11.272 15:06:36 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:11.272 15:06:36 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:11.272 15:06:36 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 12 1 0x1 00:07:11.272 15:06:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=12 00:07:11.272 15:06:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:11.272 15:06:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:11.272 15:06:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:07:11.272 15:06:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_12.conf 00:07:11.272 15:06:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:11.272 15:06:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:11.272 15:06:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 12 00:07:11.272 15:06:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4412 00:07:11.272 15:06:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:07:11.272 15:06:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' 00:07:11.272 15:06:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4412"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:11.272 15:06:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:11.272 15:06:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:11.272 15:06:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' -c /tmp/fuzz_json_12.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 -Z 12 00:07:11.272 [2024-11-27 15:06:36.547136] Starting SPDK v25.01-pre git sha1 2e10c84c8 / DPDK 24.03.0 initialization... 
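Between runs the trace drops back into ../common.sh: the counter is incremented, re-checked against fuzz_num, and start_llvm_fuzz is re-entered with the next fuzzer type and the same time/core arguments (here 12 1 0x1), which is why the target port advances from 4411 to 4412. The driver loop implied by the '(( i++ ))' and '(( i < fuzz_num ))' traces is presumably shaped like the sketch below; the exact loop form, the initial value of i, and the timen/core variable names on the caller's side are inferred rather than shown in this excerpt.

    # Inferred shape of the common.sh driver loop (sketch; fuzz_num, timen and core are set elsewhere)
    for (( i = 0; i < fuzz_num; i++ )); do
      start_llvm_fuzz "$i" "$timen" "$core"   # e.g. start_llvm_fuzz 12 1 0x1 in the trace above
    done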
00:07:11.272 [2024-11-27 15:06:36.547207] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2370585 ] 00:07:11.532 [2024-11-27 15:06:36.732052] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.532 [2024-11-27 15:06:36.765714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.532 [2024-11-27 15:06:36.825125] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:11.532 [2024-11-27 15:06:36.841473] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4412 *** 00:07:11.532 INFO: Running with entropic power schedule (0xFF, 100). 00:07:11.532 INFO: Seed: 4081702077 00:07:11.835 INFO: Loaded 1 modules (389699 inline 8-bit counters): 389699 [0x2c6ea8c, 0x2ccdccf), 00:07:11.835 INFO: Loaded 1 PC tables (389699 PCs): 389699 [0x2ccdcd0,0x32c0100), 00:07:11.835 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:07:11.835 INFO: A corpus is not provided, starting from an empty corpus 00:07:11.835 #2 INITED exec/s: 0 rss: 65Mb 00:07:11.835 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:11.835 This may also happen if the target rejected all inputs we tried so far 00:07:11.835 [2024-11-27 15:06:36.896942] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a0a7171 cdw11:71717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.835 [2024-11-27 15:06:36.896972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.096 NEW_FUNC[1/715]: 0x44c568 in fuzz_admin_directive_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:241 00:07:12.096 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:12.096 #6 NEW cov: 12273 ft: 12271 corp: 2/12b lim: 40 exec/s: 0 rss: 73Mb L: 11/11 MS: 4 ShuffleBytes-CopyPart-CrossOver-InsertRepeatedBytes- 00:07:12.096 [2024-11-27 15:06:37.237752] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a0a7171 cdw11:71717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.096 [2024-11-27 15:06:37.237784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.096 NEW_FUNC[1/2]: 0x10838f8 in _sock_flush /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/module/sock/posix/posix.c:1347 00:07:12.096 NEW_FUNC[2/2]: 0x14ee058 in nvmf_transport_poll_group_poll /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/transport.c:734 00:07:12.096 #12 NEW cov: 12397 ft: 12910 corp: 3/23b lim: 40 exec/s: 0 rss: 73Mb L: 11/11 MS: 1 CrossOver- 00:07:12.096 [2024-11-27 15:06:37.298063] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a0a7171 cdw11:7171710a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.096 [2024-11-27 15:06:37.298089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.096 [2024-11-27 15:06:37.298145] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0a717171 cdw11:71717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.096 [2024-11-27 15:06:37.298159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:12.096 #13 NEW cov: 12403 ft: 13783 corp: 4/45b lim: 40 exec/s: 0 rss: 73Mb L: 22/22 MS: 1 CrossOver- 00:07:12.096 [2024-11-27 15:06:37.358182] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a0a7171 cdw11:7171710a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.096 [2024-11-27 15:06:37.358208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.096 [2024-11-27 15:06:37.358267] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0a717171 cdw11:71717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.096 [2024-11-27 15:06:37.358282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:12.096 #14 NEW cov: 12488 ft: 14037 corp: 5/67b lim: 40 exec/s: 0 rss: 73Mb L: 22/22 MS: 1 ShuffleBytes- 00:07:12.096 [2024-11-27 15:06:37.418364] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a0a7171 cdw11:7171710a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.096 [2024-11-27 15:06:37.418389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.096 [2024-11-27 15:06:37.418448] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0a717171 cdw11:71717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.096 [2024-11-27 15:06:37.418462] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:12.355 #15 NEW cov: 12488 ft: 14172 corp: 6/89b lim: 40 exec/s: 0 rss: 73Mb L: 22/22 MS: 1 ChangeByte- 00:07:12.355 [2024-11-27 15:06:37.458605] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a010000 cdw11:010a7171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.355 [2024-11-27 15:06:37.458630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.355 [2024-11-27 15:06:37.458691] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:7171710a cdw11:0a717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.355 [2024-11-27 15:06:37.458705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:12.355 [2024-11-27 15:06:37.458773] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:71717171 cdw11:71717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.356 [2024-11-27 15:06:37.458789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:12.356 #16 NEW cov: 12488 ft: 14471 corp: 7/115b lim: 40 exec/s: 0 rss: 74Mb L: 26/26 MS: 1 CMP- DE: "\001\000\000\001"- 00:07:12.356 [2024-11-27 15:06:37.518977] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a010000 cdw11:010a7171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:07:12.356 [2024-11-27 15:06:37.519001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.356 [2024-11-27 15:06:37.519058] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:7171710a cdw11:0a717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.356 [2024-11-27 15:06:37.519072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:12.356 [2024-11-27 15:06:37.519128] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:71717171 cdw11:71717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.356 [2024-11-27 15:06:37.519142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:12.356 [2024-11-27 15:06:37.519198] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:71ffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.356 [2024-11-27 15:06:37.519212] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:12.356 #17 NEW cov: 12488 ft: 14886 corp: 8/153b lim: 40 exec/s: 0 rss: 74Mb L: 38/38 MS: 1 InsertRepeatedBytes- 00:07:12.356 [2024-11-27 15:06:37.579290] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a010000 cdw11:010a7171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.356 [2024-11-27 15:06:37.579314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.356 [2024-11-27 15:06:37.579373] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:7171710a cdw11:0a717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.356 [2024-11-27 15:06:37.579387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:12.356 [2024-11-27 15:06:37.579442] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:71717171 cdw11:71717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.356 [2024-11-27 15:06:37.579455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:12.356 [2024-11-27 15:06:37.579510] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:0a717171 cdw11:71710a0a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.356 [2024-11-27 15:06:37.579523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:12.356 [2024-11-27 15:06:37.579578] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:71717171 cdw11:71717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.356 [2024-11-27 15:06:37.579591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:12.356 #18 NEW cov: 12488 ft: 15016 corp: 9/193b lim: 40 exec/s: 0 rss: 74Mb L: 40/40 MS: 1 CopyPart- 00:07:12.356 [2024-11-27 15:06:37.618732] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a0a7171 cdw11:71717171 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:07:12.356 [2024-11-27 15:06:37.618756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.356 #19 NEW cov: 12488 ft: 15127 corp: 10/204b lim: 40 exec/s: 0 rss: 74Mb L: 11/40 MS: 1 CopyPart- 00:07:12.356 [2024-11-27 15:06:37.658865] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a0a7171 cdw11:71717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.356 [2024-11-27 15:06:37.658890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.615 #20 NEW cov: 12488 ft: 15157 corp: 11/216b lim: 40 exec/s: 0 rss: 74Mb L: 12/40 MS: 1 InsertByte- 00:07:12.615 [2024-11-27 15:06:37.719013] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a717171 cdw11:71717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.615 [2024-11-27 15:06:37.719038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.615 #21 NEW cov: 12488 ft: 15194 corp: 12/228b lim: 40 exec/s: 0 rss: 74Mb L: 12/40 MS: 1 CrossOver- 00:07:12.615 [2024-11-27 15:06:37.779219] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a0a7171 cdw11:71717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.615 [2024-11-27 15:06:37.779245] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.615 NEW_FUNC[1/1]: 0x1c47d08 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:12.615 #22 NEW cov: 12511 ft: 15235 corp: 13/239b lim: 40 exec/s: 0 rss: 74Mb L: 11/40 MS: 1 ShuffleBytes- 00:07:12.615 [2024-11-27 15:06:37.819233] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a0a7171 cdw11:71717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.615 [2024-11-27 15:06:37.819258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.615 #24 NEW cov: 12511 ft: 15319 corp: 14/252b lim: 40 exec/s: 0 rss: 74Mb L: 13/40 MS: 2 EraseBytes-CrossOver- 00:07:12.615 [2024-11-27 15:06:37.879464] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a0a7171 cdw11:710d0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.615 [2024-11-27 15:06:37.879490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.615 #25 NEW cov: 12511 ft: 15338 corp: 15/265b lim: 40 exec/s: 25 rss: 74Mb L: 13/40 MS: 1 CMP- DE: "\015\000\000\000\000\000\000\000"- 00:07:12.615 [2024-11-27 15:06:37.939652] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a0aa671 cdw11:710d0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.615 [2024-11-27 15:06:37.939677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.875 #26 NEW cov: 12511 ft: 15347 corp: 16/278b lim: 40 exec/s: 26 rss: 74Mb L: 13/40 MS: 1 ChangeByte- 00:07:12.875 [2024-11-27 15:06:37.999985] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a0a7171 cdw11:710d0000 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:07:12.875 [2024-11-27 15:06:38.000010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.875 [2024-11-27 15:06:38.000069] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.875 [2024-11-27 15:06:38.000083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:12.875 #27 NEW cov: 12511 ft: 15388 corp: 17/299b lim: 40 exec/s: 27 rss: 74Mb L: 21/40 MS: 1 PersAutoDict- DE: "\015\000\000\000\000\000\000\000"- 00:07:12.875 [2024-11-27 15:06:38.039940] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a0aa600 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.875 [2024-11-27 15:06:38.039964] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.875 #28 NEW cov: 12511 ft: 15395 corp: 18/307b lim: 40 exec/s: 28 rss: 74Mb L: 8/40 MS: 1 EraseBytes- 00:07:12.875 [2024-11-27 15:06:38.100189] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a0a7171 cdw11:7171710a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.875 [2024-11-27 15:06:38.100214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.875 [2024-11-27 15:06:38.100287] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:3d717171 cdw11:71717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.875 [2024-11-27 15:06:38.100302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:12.875 #29 NEW cov: 12511 ft: 15495 corp: 19/329b lim: 40 exec/s: 29 rss: 74Mb L: 22/40 MS: 1 ChangeByte- 00:07:12.875 [2024-11-27 15:06:38.140320] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a010000 cdw11:010a7171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.875 [2024-11-27 15:06:38.140345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.875 [2024-11-27 15:06:38.140420] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:7171710a cdw11:0a717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.875 [2024-11-27 15:06:38.140435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:12.875 #30 NEW cov: 12511 ft: 15515 corp: 20/346b lim: 40 exec/s: 30 rss: 74Mb L: 17/40 MS: 1 CrossOver- 00:07:12.875 [2024-11-27 15:06:38.180285] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a0a7171 cdw11:71717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.875 [2024-11-27 15:06:38.180310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.875 #31 NEW cov: 12511 ft: 15546 corp: 21/357b lim: 40 exec/s: 31 rss: 74Mb L: 11/40 MS: 1 ShuffleBytes- 00:07:13.134 [2024-11-27 15:06:38.220527] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a0a7171 
cdw11:7171710a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.134 [2024-11-27 15:06:38.220551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.134 [2024-11-27 15:06:38.220631] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0a717171 cdw11:71717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.134 [2024-11-27 15:06:38.220646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:13.134 #32 NEW cov: 12511 ft: 15670 corp: 22/373b lim: 40 exec/s: 32 rss: 74Mb L: 16/40 MS: 1 EraseBytes- 00:07:13.134 [2024-11-27 15:06:38.260559] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a71710a cdw11:71717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.134 [2024-11-27 15:06:38.260584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.134 #33 NEW cov: 12511 ft: 15682 corp: 23/385b lim: 40 exec/s: 33 rss: 74Mb L: 12/40 MS: 1 CrossOver- 00:07:13.134 [2024-11-27 15:06:38.320902] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a010000 cdw11:010a7a71 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.134 [2024-11-27 15:06:38.320927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.134 [2024-11-27 15:06:38.321001] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:71717171 cdw11:0a0a7171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.134 [2024-11-27 15:06:38.321015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:13.134 #34 NEW cov: 12511 ft: 15687 corp: 24/403b lim: 40 exec/s: 34 rss: 74Mb L: 18/40 MS: 1 InsertByte- 00:07:13.134 [2024-11-27 15:06:38.380903] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0d000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.134 [2024-11-27 15:06:38.380928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.134 #35 NEW cov: 12511 ft: 15703 corp: 25/411b lim: 40 exec/s: 35 rss: 74Mb L: 8/40 MS: 1 PersAutoDict- DE: "\015\000\000\000\000\000\000\000"- 00:07:13.134 [2024-11-27 15:06:38.441241] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:71710d00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.134 [2024-11-27 15:06:38.441267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.134 [2024-11-27 15:06:38.441326] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000d00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.134 [2024-11-27 15:06:38.441341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:13.394 #36 NEW cov: 12511 ft: 15717 corp: 26/429b lim: 40 exec/s: 36 rss: 74Mb L: 18/40 MS: 1 CrossOver- 00:07:13.394 [2024-11-27 15:06:38.501218] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 
nsid:0 cdw10:0a717171 cdw11:71717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.394 [2024-11-27 15:06:38.501242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.394 #37 NEW cov: 12511 ft: 15781 corp: 27/442b lim: 40 exec/s: 37 rss: 74Mb L: 13/40 MS: 1 InsertByte- 00:07:13.394 [2024-11-27 15:06:38.541285] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a0a710d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.394 [2024-11-27 15:06:38.541311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.394 #38 NEW cov: 12511 ft: 15844 corp: 28/455b lim: 40 exec/s: 38 rss: 74Mb L: 13/40 MS: 1 PersAutoDict- DE: "\015\000\000\000\000\000\000\000"- 00:07:13.394 [2024-11-27 15:06:38.581377] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a0aa671 cdw11:710d0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.394 [2024-11-27 15:06:38.581418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.394 #39 NEW cov: 12511 ft: 15856 corp: 29/468b lim: 40 exec/s: 39 rss: 74Mb L: 13/40 MS: 1 CopyPart- 00:07:13.394 [2024-11-27 15:06:38.621757] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a0a7171 cdw11:7171713d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.394 [2024-11-27 15:06:38.621781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.394 [2024-11-27 15:06:38.621855] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0a717171 cdw11:71717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.394 [2024-11-27 15:06:38.621869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:13.394 #40 NEW cov: 12511 ft: 15863 corp: 30/490b lim: 40 exec/s: 40 rss: 75Mb L: 22/40 MS: 1 ShuffleBytes- 00:07:13.394 [2024-11-27 15:06:38.682246] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a0a7171 cdw11:71717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.394 [2024-11-27 15:06:38.682271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.394 [2024-11-27 15:06:38.682331] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:71ffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.394 [2024-11-27 15:06:38.682346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:13.394 [2024-11-27 15:06:38.682407] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.394 [2024-11-27 15:06:38.682421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:13.394 [2024-11-27 15:06:38.682480] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffff71 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.394 
[2024-11-27 15:06:38.682494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:13.394 #41 NEW cov: 12511 ft: 15883 corp: 31/523b lim: 40 exec/s: 41 rss: 75Mb L: 33/40 MS: 1 InsertRepeatedBytes- 00:07:13.394 [2024-11-27 15:06:38.721781] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a0a7171 cdw11:71717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.394 [2024-11-27 15:06:38.721806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.654 #42 NEW cov: 12511 ft: 15893 corp: 32/534b lim: 40 exec/s: 42 rss: 75Mb L: 11/40 MS: 1 EraseBytes- 00:07:13.654 [2024-11-27 15:06:38.762127] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a0a7171 cdw11:71717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.654 [2024-11-27 15:06:38.762153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.654 [2024-11-27 15:06:38.762229] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:717171ff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.654 [2024-11-27 15:06:38.762244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:13.654 #43 NEW cov: 12511 ft: 15894 corp: 33/557b lim: 40 exec/s: 43 rss: 75Mb L: 23/40 MS: 1 InsertRepeatedBytes- 00:07:13.654 [2024-11-27 15:06:38.802048] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a0a7171 cdw11:71717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.654 [2024-11-27 15:06:38.802073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.654 #44 NEW cov: 12511 ft: 15927 corp: 34/568b lim: 40 exec/s: 44 rss: 75Mb L: 11/40 MS: 1 ShuffleBytes- 00:07:13.654 [2024-11-27 15:06:38.842173] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:f3ffffff cdw11:fffffffc SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.654 [2024-11-27 15:06:38.842199] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.654 #45 NEW cov: 12511 ft: 15935 corp: 35/576b lim: 40 exec/s: 45 rss: 75Mb L: 8/40 MS: 1 ChangeBinInt- 00:07:13.654 [2024-11-27 15:06:38.882616] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a010000 cdw11:010a0909 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.654 [2024-11-27 15:06:38.882640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.654 [2024-11-27 15:06:38.882661] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:09090909 cdw11:7a717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.654 [2024-11-27 15:06:38.882672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:13.654 [2024-11-27 15:06:38.882732] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:71710a0a cdw11:7171710a SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:07:13.654 [2024-11-27 15:06:38.882746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:13.654 #46 NEW cov: 12511 ft: 15950 corp: 36/600b lim: 40 exec/s: 23 rss: 75Mb L: 24/40 MS: 1 InsertRepeatedBytes- 00:07:13.654 #46 DONE cov: 12511 ft: 15950 corp: 36/600b lim: 40 exec/s: 23 rss: 75Mb 00:07:13.654 ###### Recommended dictionary. ###### 00:07:13.654 "\001\000\000\001" # Uses: 0 00:07:13.654 "\015\000\000\000\000\000\000\000" # Uses: 3 00:07:13.654 ###### End of recommended dictionary. ###### 00:07:13.654 Done 46 runs in 2 second(s) 00:07:13.914 15:06:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_12.conf /var/tmp/suppress_nvmf_fuzz 00:07:13.914 15:06:39 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:13.914 15:06:39 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:13.914 15:06:39 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 13 1 0x1 00:07:13.914 15:06:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=13 00:07:13.914 15:06:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:13.914 15:06:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:13.914 15:06:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:07:13.914 15:06:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_13.conf 00:07:13.914 15:06:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:13.914 15:06:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:13.914 15:06:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 13 00:07:13.914 15:06:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4413 00:07:13.914 15:06:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:07:13.914 15:06:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' 00:07:13.914 15:06:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4413"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:13.914 15:06:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:13.914 15:06:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:13.914 15:06:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' -c /tmp/fuzz_json_13.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 -Z 13 00:07:13.914 [2024-11-27 15:06:39.055343] Starting SPDK v25.01-pre git sha1 2e10c84c8 / DPDK 24.03.0 initialization... 
00:07:13.914 [2024-11-27 15:06:39.055406] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2371123 ] 00:07:13.914 [2024-11-27 15:06:39.236121] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.172 [2024-11-27 15:06:39.270229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.172 [2024-11-27 15:06:39.329580] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:14.172 [2024-11-27 15:06:39.345961] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4413 *** 00:07:14.172 INFO: Running with entropic power schedule (0xFF, 100). 00:07:14.172 INFO: Seed: 2291730339 00:07:14.172 INFO: Loaded 1 modules (389699 inline 8-bit counters): 389699 [0x2c6ea8c, 0x2ccdccf), 00:07:14.172 INFO: Loaded 1 PC tables (389699 PCs): 389699 [0x2ccdcd0,0x32c0100), 00:07:14.172 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:07:14.172 INFO: A corpus is not provided, starting from an empty corpus 00:07:14.172 #2 INITED exec/s: 0 rss: 66Mb 00:07:14.172 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:14.172 This may also happen if the target rejected all inputs we tried so far 00:07:14.172 [2024-11-27 15:06:39.401670] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.173 [2024-11-27 15:06:39.401700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.173 [2024-11-27 15:06:39.401759] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.173 [2024-11-27 15:06:39.401772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:14.173 [2024-11-27 15:06:39.401829] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.173 [2024-11-27 15:06:39.401842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:14.173 [2024-11-27 15:06:39.401897] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.173 [2024-11-27 15:06:39.401911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:14.431 NEW_FUNC[1/716]: 0x44e138 in fuzz_admin_directive_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:257 00:07:14.431 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:14.431 #5 NEW cov: 12272 ft: 12271 corp: 2/35b lim: 40 exec/s: 0 rss: 73Mb L: 34/34 MS: 3 ChangeBit-ShuffleBytes-InsertRepeatedBytes- 00:07:14.431 [2024-11-27 15:06:39.742538] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.431 [2024-11-27 15:06:39.742569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.431 [2024-11-27 15:06:39.742632] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.431 [2024-11-27 15:06:39.742646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:14.431 [2024-11-27 15:06:39.742700] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.431 [2024-11-27 15:06:39.742713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:14.431 [2024-11-27 15:06:39.742770] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffff8a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.431 [2024-11-27 15:06:39.742783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:14.691 #11 NEW cov: 12385 ft: 12972 corp: 3/67b lim: 40 exec/s: 0 rss: 73Mb L: 32/34 MS: 1 EraseBytes- 00:07:14.691 [2024-11-27 15:06:39.802656] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ff20ffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.691 [2024-11-27 15:06:39.802684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.691 [2024-11-27 15:06:39.802743] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.691 [2024-11-27 15:06:39.802757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:14.691 [2024-11-27 15:06:39.802815] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.691 [2024-11-27 15:06:39.802832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:14.691 [2024-11-27 15:06:39.802890] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.691 [2024-11-27 15:06:39.802903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:14.691 #12 NEW cov: 12391 ft: 13282 corp: 4/101b lim: 40 exec/s: 0 rss: 73Mb L: 34/34 MS: 1 ChangeByte- 00:07:14.691 [2024-11-27 15:06:39.842731] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ff20ffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.691 [2024-11-27 15:06:39.842757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.691 [2024-11-27 15:06:39.842813] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.691 [2024-11-27 15:06:39.842827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:14.691 [2024-11-27 15:06:39.842882] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.691 [2024-11-27 15:06:39.842897] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:14.691 [2024-11-27 15:06:39.842950] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.691 [2024-11-27 15:06:39.842963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:14.691 #13 NEW cov: 12476 ft: 13535 corp: 5/138b lim: 40 exec/s: 0 rss: 73Mb L: 37/37 MS: 1 CopyPart- 00:07:14.691 [2024-11-27 15:06:39.902913] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ff20ffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.691 [2024-11-27 15:06:39.902938] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.691 [2024-11-27 15:06:39.902998] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.691 [2024-11-27 15:06:39.903012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:14.691 [2024-11-27 15:06:39.903071] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.691 [2024-11-27 15:06:39.903084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:14.691 [2024-11-27 15:06:39.903143] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.691 [2024-11-27 15:06:39.903157] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:14.691 #14 NEW cov: 12476 ft: 13611 corp: 6/177b lim: 40 exec/s: 0 rss: 73Mb L: 39/39 MS: 1 CrossOver- 00:07:14.691 [2024-11-27 15:06:39.942882] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.691 [2024-11-27 15:06:39.942908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.691 [2024-11-27 15:06:39.942973] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.691 [2024-11-27 15:06:39.942988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 
p:0 m:0 dnr:0 00:07:14.691 [2024-11-27 15:06:39.943042] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.691 [2024-11-27 15:06:39.943056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:14.691 #15 NEW cov: 12476 ft: 14157 corp: 7/207b lim: 40 exec/s: 0 rss: 73Mb L: 30/39 MS: 1 EraseBytes- 00:07:14.691 [2024-11-27 15:06:39.983120] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.691 [2024-11-27 15:06:39.983146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.691 [2024-11-27 15:06:39.983204] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.691 [2024-11-27 15:06:39.983220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:14.691 [2024-11-27 15:06:39.983277] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ff540000 cdw11:00ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.691 [2024-11-27 15:06:39.983291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:14.691 [2024-11-27 15:06:39.983349] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.691 [2024-11-27 15:06:39.983363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:14.691 #16 NEW cov: 12476 ft: 14288 corp: 8/241b lim: 40 exec/s: 0 rss: 73Mb L: 34/39 MS: 1 CMP- DE: "T\000\000\000"- 00:07:14.691 [2024-11-27 15:06:40.023280] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.691 [2024-11-27 15:06:40.023307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.691 [2024-11-27 15:06:40.023367] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.691 [2024-11-27 15:06:40.023381] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:14.691 [2024-11-27 15:06:40.023440] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ff540000 cdw11:00ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.691 [2024-11-27 15:06:40.023455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:14.691 [2024-11-27 15:06:40.023511] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.691 [2024-11-27 15:06:40.023526] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:14.951 #17 NEW cov: 12476 ft: 14362 corp: 9/275b lim: 40 exec/s: 0 rss: 73Mb L: 34/39 MS: 1 ShuffleBytes- 00:07:14.951 [2024-11-27 15:06:40.083528] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.951 [2024-11-27 15:06:40.083559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.951 [2024-11-27 15:06:40.083623] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.951 [2024-11-27 15:06:40.083638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:14.951 [2024-11-27 15:06:40.083698] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.951 [2024-11-27 15:06:40.083711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:14.951 [2024-11-27 15:06:40.083777] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.951 [2024-11-27 15:06:40.083790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:14.951 #18 NEW cov: 12476 ft: 14438 corp: 10/311b lim: 40 exec/s: 0 rss: 74Mb L: 36/39 MS: 1 CopyPart- 00:07:14.951 [2024-11-27 15:06:40.143797] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.951 [2024-11-27 15:06:40.143826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.951 [2024-11-27 15:06:40.143885] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.951 [2024-11-27 15:06:40.143900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:14.951 [2024-11-27 15:06:40.143957] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.951 [2024-11-27 15:06:40.143971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:14.951 [2024-11-27 15:06:40.144028] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ff540000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.951 [2024-11-27 15:06:40.144042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:14.951 [2024-11-27 15:06:40.144098] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:00ffffff cdw11:ffffff8a SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:07:14.951 [2024-11-27 15:06:40.144111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:14.951 #19 NEW cov: 12476 ft: 14520 corp: 11/351b lim: 40 exec/s: 0 rss: 74Mb L: 40/40 MS: 1 PersAutoDict- DE: "T\000\000\000"- 00:07:14.951 [2024-11-27 15:06:40.203509] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ff20ffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.951 [2024-11-27 15:06:40.203535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.951 [2024-11-27 15:06:40.203592] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.951 [2024-11-27 15:06:40.203610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:14.951 #20 NEW cov: 12476 ft: 14811 corp: 12/370b lim: 40 exec/s: 0 rss: 74Mb L: 19/40 MS: 1 EraseBytes- 00:07:14.951 [2024-11-27 15:06:40.263940] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.951 [2024-11-27 15:06:40.263969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.951 [2024-11-27 15:06:40.264031] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffff54 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.951 [2024-11-27 15:06:40.264046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:14.951 [2024-11-27 15:06:40.264104] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:000000ff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.951 [2024-11-27 15:06:40.264118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:14.951 [2024-11-27 15:06:40.264178] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffff8a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.951 [2024-11-27 15:06:40.264192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:15.212 NEW_FUNC[1/1]: 0x1c47d08 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:15.212 #21 NEW cov: 12499 ft: 14853 corp: 13/402b lim: 40 exec/s: 0 rss: 74Mb L: 32/40 MS: 1 EraseBytes- 00:07:15.212 [2024-11-27 15:06:40.324111] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ff20ffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.212 [2024-11-27 15:06:40.324137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.212 [2024-11-27 15:06:40.324195] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.212 [2024-11-27 15:06:40.324210] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.213 [2024-11-27 15:06:40.324266] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.213 [2024-11-27 15:06:40.324280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.213 [2024-11-27 15:06:40.324337] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.213 [2024-11-27 15:06:40.324352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:15.213 #22 NEW cov: 12499 ft: 14894 corp: 14/439b lim: 40 exec/s: 0 rss: 74Mb L: 37/40 MS: 1 ChangeBit- 00:07:15.213 [2024-11-27 15:06:40.364168] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ff20ffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.213 [2024-11-27 15:06:40.364194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.213 [2024-11-27 15:06:40.364250] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.213 [2024-11-27 15:06:40.364265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.213 [2024-11-27 15:06:40.364325] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffff0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.213 [2024-11-27 15:06:40.364339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.213 [2024-11-27 15:06:40.364396] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.213 [2024-11-27 15:06:40.364413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:15.213 #23 NEW cov: 12499 ft: 14911 corp: 15/473b lim: 40 exec/s: 23 rss: 74Mb L: 34/40 MS: 1 CMP- DE: "\000\000\000\000\000\000\000\000"- 00:07:15.213 [2024-11-27 15:06:40.404305] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.213 [2024-11-27 15:06:40.404331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.213 [2024-11-27 15:06:40.404388] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.213 [2024-11-27 15:06:40.404403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.213 [2024-11-27 15:06:40.404459] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ff540000 
cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.213 [2024-11-27 15:06:40.404473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.213 [2024-11-27 15:06:40.404533] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.213 [2024-11-27 15:06:40.404547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:15.213 #24 NEW cov: 12499 ft: 14913 corp: 16/507b lim: 40 exec/s: 24 rss: 74Mb L: 34/40 MS: 1 ShuffleBytes- 00:07:15.213 [2024-11-27 15:06:40.444417] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ff20ffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.213 [2024-11-27 15:06:40.444443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.213 [2024-11-27 15:06:40.444503] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:bfffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.213 [2024-11-27 15:06:40.444518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.213 [2024-11-27 15:06:40.444572] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.213 [2024-11-27 15:06:40.444587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.213 [2024-11-27 15:06:40.444649] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.213 [2024-11-27 15:06:40.444664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:15.213 #25 NEW cov: 12499 ft: 14997 corp: 17/546b lim: 40 exec/s: 25 rss: 74Mb L: 39/40 MS: 1 ChangeBit- 00:07:15.213 [2024-11-27 15:06:40.504621] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.213 [2024-11-27 15:06:40.504647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.213 [2024-11-27 15:06:40.504706] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.213 [2024-11-27 15:06:40.504720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.213 [2024-11-27 15:06:40.504783] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ff000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.213 [2024-11-27 15:06:40.504797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.213 [2024-11-27 15:06:40.504854] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:09ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.213 [2024-11-27 15:06:40.504868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:15.213 #26 NEW cov: 12499 ft: 15033 corp: 18/582b lim: 40 exec/s: 26 rss: 74Mb L: 36/40 MS: 1 ChangeBinInt- 00:07:15.213 [2024-11-27 15:06:40.544716] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ff20ffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.213 [2024-11-27 15:06:40.544742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.213 [2024-11-27 15:06:40.544800] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.213 [2024-11-27 15:06:40.544815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.213 [2024-11-27 15:06:40.544876] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.213 [2024-11-27 15:06:40.544891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.213 [2024-11-27 15:06:40.544950] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.213 [2024-11-27 15:06:40.544964] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:15.473 #27 NEW cov: 12499 ft: 15053 corp: 19/619b lim: 40 exec/s: 27 rss: 74Mb L: 37/40 MS: 1 ShuffleBytes- 00:07:15.473 [2024-11-27 15:06:40.584987] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ff20ffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.473 [2024-11-27 15:06:40.585012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.473 [2024-11-27 15:06:40.585072] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:bfffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.473 [2024-11-27 15:06:40.585086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.474 [2024-11-27 15:06:40.585146] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.474 [2024-11-27 15:06:40.585161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.474 [2024-11-27 15:06:40.585221] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.474 [2024-11-27 15:06:40.585235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:15.474 [2024-11-27 
15:06:40.585296] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffffff8a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.474 [2024-11-27 15:06:40.585310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:15.474 #28 NEW cov: 12499 ft: 15057 corp: 20/659b lim: 40 exec/s: 28 rss: 74Mb L: 40/40 MS: 1 CopyPart- 00:07:15.474 [2024-11-27 15:06:40.644990] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:54000000 cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.474 [2024-11-27 15:06:40.645016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.474 [2024-11-27 15:06:40.645074] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.474 [2024-11-27 15:06:40.645089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.474 [2024-11-27 15:06:40.645144] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.474 [2024-11-27 15:06:40.645159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.474 [2024-11-27 15:06:40.645216] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.474 [2024-11-27 15:06:40.645230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:15.474 #29 NEW cov: 12499 ft: 15073 corp: 21/693b lim: 40 exec/s: 29 rss: 74Mb L: 34/40 MS: 1 PersAutoDict- DE: "T\000\000\000"- 00:07:15.474 [2024-11-27 15:06:40.685093] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ff20ffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.474 [2024-11-27 15:06:40.685119] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.474 [2024-11-27 15:06:40.685177] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.474 [2024-11-27 15:06:40.685192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.474 [2024-11-27 15:06:40.685249] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffff0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.474 [2024-11-27 15:06:40.685263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.474 [2024-11-27 15:06:40.685320] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.474 [2024-11-27 15:06:40.685334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:15.474 #30 NEW cov: 12499 ft: 15095 corp: 22/727b lim: 40 exec/s: 30 rss: 74Mb L: 34/40 MS: 1 CopyPart- 00:07:15.474 [2024-11-27 15:06:40.745055] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0aa2a2a2 cdw11:a2a2a2a2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.474 [2024-11-27 15:06:40.745080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.474 [2024-11-27 15:06:40.745137] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:a2a2a2a2 cdw11:a2a2a2a2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.474 [2024-11-27 15:06:40.745151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.474 #32 NEW cov: 12499 ft: 15102 corp: 23/744b lim: 40 exec/s: 32 rss: 74Mb L: 17/40 MS: 2 CopyPart-InsertRepeatedBytes- 00:07:15.474 [2024-11-27 15:06:40.785271] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.474 [2024-11-27 15:06:40.785301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.474 [2024-11-27 15:06:40.785362] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.474 [2024-11-27 15:06:40.785376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.474 [2024-11-27 15:06:40.785433] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ff5400ff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.474 [2024-11-27 15:06:40.785447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.474 #33 NEW cov: 12499 ft: 15138 corp: 24/770b lim: 40 exec/s: 33 rss: 74Mb L: 26/40 MS: 1 EraseBytes- 00:07:15.733 [2024-11-27 15:06:40.825520] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ff20ffff cdw11:ff0004ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.733 [2024-11-27 15:06:40.825546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.733 [2024-11-27 15:06:40.825611] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.733 [2024-11-27 15:06:40.825626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.733 [2024-11-27 15:06:40.825684] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.733 [2024-11-27 15:06:40.825698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.733 [2024-11-27 15:06:40.825757] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) 
qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.733 [2024-11-27 15:06:40.825770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:15.733 #34 NEW cov: 12499 ft: 15152 corp: 25/809b lim: 40 exec/s: 34 rss: 74Mb L: 39/40 MS: 1 CMP- DE: "\000\004"- 00:07:15.733 [2024-11-27 15:06:40.885714] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ff20ffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.733 [2024-11-27 15:06:40.885739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.733 [2024-11-27 15:06:40.885811] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.733 [2024-11-27 15:06:40.885825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.733 [2024-11-27 15:06:40.885881] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffbfff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.733 [2024-11-27 15:06:40.885894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.733 [2024-11-27 15:06:40.885952] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.733 [2024-11-27 15:06:40.885965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:15.733 #35 NEW cov: 12499 ft: 15162 corp: 26/848b lim: 40 exec/s: 35 rss: 74Mb L: 39/40 MS: 1 ChangeBit- 00:07:15.733 [2024-11-27 15:06:40.925852] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ff20ffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.733 [2024-11-27 15:06:40.925880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.733 [2024-11-27 15:06:40.925938] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:38ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.733 [2024-11-27 15:06:40.925952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.733 [2024-11-27 15:06:40.926007] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.733 [2024-11-27 15:06:40.926020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.733 [2024-11-27 15:06:40.926075] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.733 [2024-11-27 15:06:40.926088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:15.733 [2024-11-27 15:06:40.926144] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffffff8a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.733 [2024-11-27 15:06:40.926157] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:15.733 #36 NEW cov: 12499 ft: 15175 corp: 27/888b lim: 40 exec/s: 36 rss: 75Mb L: 40/40 MS: 1 ChangeByte- 00:07:15.733 [2024-11-27 15:06:40.985683] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ff51975c cdw11:e0a67592 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.733 [2024-11-27 15:06:40.985708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.733 [2024-11-27 15:06:40.985770] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.733 [2024-11-27 15:06:40.985784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.733 #37 NEW cov: 12499 ft: 15229 corp: 28/907b lim: 40 exec/s: 37 rss: 75Mb L: 19/40 MS: 1 CMP- DE: "Q\227\\\340\246u\222\000"- 00:07:15.733 [2024-11-27 15:06:41.046111] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.734 [2024-11-27 15:06:41.046135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.734 [2024-11-27 15:06:41.046208] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.734 [2024-11-27 15:06:41.046222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.734 [2024-11-27 15:06:41.046277] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ff000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.734 [2024-11-27 15:06:41.046290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.734 [2024-11-27 15:06:41.046346] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:09ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.734 [2024-11-27 15:06:41.046359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:15.993 #38 NEW cov: 12499 ft: 15255 corp: 29/944b lim: 40 exec/s: 38 rss: 75Mb L: 37/40 MS: 1 InsertByte- 00:07:15.993 [2024-11-27 15:06:41.106098] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.993 [2024-11-27 15:06:41.106123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.993 [2024-11-27 15:06:41.106180] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ff540000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:07:15.993 [2024-11-27 15:06:41.106194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.993 [2024-11-27 15:06:41.106250] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.993 [2024-11-27 15:06:41.106263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.993 #39 NEW cov: 12499 ft: 15282 corp: 30/974b lim: 40 exec/s: 39 rss: 75Mb L: 30/40 MS: 1 EraseBytes- 00:07:15.993 [2024-11-27 15:06:41.146335] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ff20ffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.993 [2024-11-27 15:06:41.146360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.993 [2024-11-27 15:06:41.146435] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.993 [2024-11-27 15:06:41.146449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.993 [2024-11-27 15:06:41.146506] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ff540000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.993 [2024-11-27 15:06:41.146520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.993 [2024-11-27 15:06:41.146578] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.993 [2024-11-27 15:06:41.146591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:15.993 #40 NEW cov: 12499 ft: 15291 corp: 31/1008b lim: 40 exec/s: 40 rss: 75Mb L: 34/40 MS: 1 CrossOver- 00:07:15.993 [2024-11-27 15:06:41.186513] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ff20ffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.993 [2024-11-27 15:06:41.186537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.993 [2024-11-27 15:06:41.186596] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.993 [2024-11-27 15:06:41.186614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.993 [2024-11-27 15:06:41.186669] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.993 [2024-11-27 15:06:41.186682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.993 [2024-11-27 15:06:41.186738] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 
cdw10:ffff29ff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.993 [2024-11-27 15:06:41.186750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:15.993 #41 NEW cov: 12499 ft: 15301 corp: 32/1047b lim: 40 exec/s: 41 rss: 75Mb L: 39/40 MS: 1 ChangeByte- 00:07:15.993 [2024-11-27 15:06:41.226620] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ff20f8ff cdw11:ff0004ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.993 [2024-11-27 15:06:41.226645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.993 [2024-11-27 15:06:41.226722] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.993 [2024-11-27 15:06:41.226736] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.993 [2024-11-27 15:06:41.226795] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.993 [2024-11-27 15:06:41.226809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.993 [2024-11-27 15:06:41.226865] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.993 [2024-11-27 15:06:41.226879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:15.993 #42 NEW cov: 12499 ft: 15307 corp: 33/1086b lim: 40 exec/s: 42 rss: 75Mb L: 39/40 MS: 1 ChangeBinInt- 00:07:15.993 [2024-11-27 15:06:41.286873] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ff20f8ff cdw11:ff0004ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.993 [2024-11-27 15:06:41.286898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.993 [2024-11-27 15:06:41.286973] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.993 [2024-11-27 15:06:41.286987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.993 [2024-11-27 15:06:41.287047] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.993 [2024-11-27 15:06:41.287060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.993 [2024-11-27 15:06:41.287118] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.993 [2024-11-27 15:06:41.287131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:15.993 #43 NEW cov: 12499 ft: 15316 corp: 34/1125b lim: 40 exec/s: 43 
rss: 75Mb L: 39/40 MS: 1 ShuffleBytes- 00:07:16.253 [2024-11-27 15:06:41.347055] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ff20ffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.253 [2024-11-27 15:06:41.347080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.253 [2024-11-27 15:06:41.347138] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:38ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.253 [2024-11-27 15:06:41.347152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:16.253 [2024-11-27 15:06:41.347208] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0100 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.253 [2024-11-27 15:06:41.347221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:16.253 [2024-11-27 15:06:41.347281] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:0006ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.253 [2024-11-27 15:06:41.347294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:16.253 [2024-11-27 15:06:41.347351] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffffff8a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.253 [2024-11-27 15:06:41.347364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:16.253 #44 NEW cov: 12499 ft: 15345 corp: 35/1165b lim: 40 exec/s: 22 rss: 75Mb L: 40/40 MS: 1 ChangeBinInt- 00:07:16.253 #44 DONE cov: 12499 ft: 15345 corp: 35/1165b lim: 40 exec/s: 22 rss: 75Mb 00:07:16.253 ###### Recommended dictionary. ###### 00:07:16.253 "T\000\000\000" # Uses: 2 00:07:16.253 "\000\000\000\000\000\000\000\000" # Uses: 0 00:07:16.253 "\000\004" # Uses: 0 00:07:16.253 "Q\227\\\340\246u\222\000" # Uses: 0 00:07:16.253 ###### End of recommended dictionary. 
###### 00:07:16.253 Done 44 runs in 2 second(s) 00:07:16.253 15:06:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_13.conf /var/tmp/suppress_nvmf_fuzz 00:07:16.253 15:06:41 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:16.253 15:06:41 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:16.253 15:06:41 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 14 1 0x1 00:07:16.253 15:06:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=14 00:07:16.253 15:06:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:16.253 15:06:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:16.253 15:06:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:07:16.253 15:06:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_14.conf 00:07:16.253 15:06:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:16.253 15:06:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:16.253 15:06:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 14 00:07:16.253 15:06:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4414 00:07:16.253 15:06:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:07:16.253 15:06:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' 00:07:16.253 15:06:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4414"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:16.253 15:06:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:16.253 15:06:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:16.254 15:06:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' -c /tmp/fuzz_json_14.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 -Z 14 00:07:16.254 [2024-11-27 15:06:41.532415] Starting SPDK v25.01-pre git sha1 2e10c84c8 / DPDK 24.03.0 initialization... 
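For reference only: the run.sh steps traced just above reduce to roughly the following standalone launch of fuzzer target 14. This is a sketch reconstructed from the xtrace lines, not part of the job output; SPDK_DIR is a placeholder for the spdk checkout used by this workspace, and the redirections into the config and suppression files are assumed (the trace shows only the sed/echo commands themselves, not where their output goes).
# sketch of the traced launch; assumed redirections are noted in the comments
SPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk   # placeholder for the spdk tree
mkdir -p "$SPDK_DIR/../corpus/llvm_nvmf_14"
# rewrite the listener port in the template config (4420 -> 4414); redirect to the per-target conf is assumed
sed -e 's/"trsvcid": "4420"/"trsvcid": "4414"/' \
    "$SPDK_DIR/test/fuzz/llvm/nvmf/fuzz_json.conf" > /tmp/fuzz_json_14.conf
# LeakSanitizer suppressions referenced by LSAN_OPTIONS in the trace; writing them into the file is assumed
echo leak:spdk_nvmf_qpair_disconnect  > /var/tmp/suppress_nvmf_fuzz
echo leak:nvmf_ctrlr_create          >> /var/tmp/suppress_nvmf_fuzz
export LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
# flags as traced: core mask 0x1, -s 512 memory size, -t 1 (timen=1), target index 14 -> TCP port 4414
"$SPDK_DIR/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" -m 0x1 -s 512 \
    -P "$SPDK_DIR/../output/llvm/" \
    -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' \
    -c /tmp/fuzz_json_14.conf -t 1 \
    -D "$SPDK_DIR/../corpus/llvm_nvmf_14" -Z 14
The later targets in this log repeat the same pattern with the index and trsvcid bumped together (15 -> 4415, 16 -> 4416), as the subsequent run.sh traces show.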
00:07:16.254 [2024-11-27 15:06:41.532485] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2371413 ] 00:07:16.513 [2024-11-27 15:06:41.721857] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.513 [2024-11-27 15:06:41.755617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.513 [2024-11-27 15:06:41.815218] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:16.513 [2024-11-27 15:06:41.831574] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4414 *** 00:07:16.513 INFO: Running with entropic power schedule (0xFF, 100). 00:07:16.513 INFO: Seed: 480766718 00:07:16.771 INFO: Loaded 1 modules (389699 inline 8-bit counters): 389699 [0x2c6ea8c, 0x2ccdccf), 00:07:16.771 INFO: Loaded 1 PC tables (389699 PCs): 389699 [0x2ccdcd0,0x32c0100), 00:07:16.771 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:07:16.771 INFO: A corpus is not provided, starting from an empty corpus 00:07:16.771 #2 INITED exec/s: 0 rss: 65Mb 00:07:16.771 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:16.771 This may also happen if the target rejected all inputs we tried so far 00:07:16.771 [2024-11-27 15:06:41.886916] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000c9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.771 [2024-11-27 15:06:41.886949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.030 NEW_FUNC[1/717]: 0x44fd08 in fuzz_admin_set_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:392 00:07:17.030 NEW_FUNC[2/717]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:17.030 #7 NEW cov: 12266 ft: 12262 corp: 2/12b lim: 35 exec/s: 0 rss: 73Mb L: 11/11 MS: 5 ChangeBit-ChangeBit-ChangeBinInt-ChangeByte-InsertRepeatedBytes- 00:07:17.030 [2024-11-27 15:06:42.218010] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.030 [2024-11-27 15:06:42.218051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.030 [2024-11-27 15:06:42.218122] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.030 [2024-11-27 15:06:42.218140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.030 #11 NEW cov: 12386 ft: 13365 corp: 3/27b lim: 35 exec/s: 0 rss: 73Mb L: 15/15 MS: 4 CrossOver-ChangeByte-ChangeBinInt-InsertRepeatedBytes- 00:07:17.030 [2024-11-27 15:06:42.257842] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000c9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.030 [2024-11-27 15:06:42.257870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.030 #12 NEW cov: 12392 ft: 13542 corp: 
4/38b lim: 35 exec/s: 0 rss: 73Mb L: 11/15 MS: 1 ShuffleBytes- 00:07:17.030 [2024-11-27 15:06:42.318352] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.030 [2024-11-27 15:06:42.318376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.030 [2024-11-27 15:06:42.318451] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000bf SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.030 [2024-11-27 15:06:42.318468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.030 [2024-11-27 15:06:42.318525] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.030 [2024-11-27 15:06:42.318539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:17.031 #13 NEW cov: 12477 ft: 13926 corp: 5/61b lim: 35 exec/s: 0 rss: 73Mb L: 23/23 MS: 1 CMP- DE: "i4\242\277\242u\222\000"- 00:07:17.290 [2024-11-27 15:06:42.378220] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000c9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.290 [2024-11-27 15:06:42.378250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.290 #14 NEW cov: 12477 ft: 14107 corp: 6/72b lim: 35 exec/s: 0 rss: 73Mb L: 11/23 MS: 1 PersAutoDict- DE: "i4\242\277\242u\222\000"- 00:07:17.290 [2024-11-27 15:06:42.418627] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.290 [2024-11-27 15:06:42.418652] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.290 [2024-11-27 15:06:42.418730] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000bf SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.290 [2024-11-27 15:06:42.418748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.290 [2024-11-27 15:06:42.418810] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.290 [2024-11-27 15:06:42.418824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:17.290 #15 NEW cov: 12477 ft: 14174 corp: 7/96b lim: 35 exec/s: 0 rss: 74Mb L: 24/24 MS: 1 InsertByte- 00:07:17.290 [2024-11-27 15:06:42.478460] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.290 [2024-11-27 15:06:42.478488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.290 #16 NEW cov: 12477 ft: 14443 corp: 8/107b lim: 35 exec/s: 0 rss: 74Mb L: 11/24 MS: 1 ChangeByte- 00:07:17.290 [2024-11-27 15:06:42.538782] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000c9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.290 
[2024-11-27 15:06:42.538810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.290 [2024-11-27 15:06:42.538889] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000a2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.290 [2024-11-27 15:06:42.538906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.290 #17 NEW cov: 12477 ft: 14545 corp: 9/126b lim: 35 exec/s: 0 rss: 74Mb L: 19/24 MS: 1 CrossOver- 00:07:17.290 [2024-11-27 15:06:42.598775] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000c9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.290 [2024-11-27 15:06:42.598801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.290 #18 NEW cov: 12477 ft: 14575 corp: 10/137b lim: 35 exec/s: 0 rss: 74Mb L: 11/24 MS: 1 ChangeByte- 00:07:17.549 [2024-11-27 15:06:42.639185] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.549 [2024-11-27 15:06:42.639209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.549 [2024-11-27 15:06:42.639270] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000a2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.549 [2024-11-27 15:06:42.639286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.549 [2024-11-27 15:06:42.639344] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.549 [2024-11-27 15:06:42.639357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:17.549 #19 NEW cov: 12477 ft: 14668 corp: 11/162b lim: 35 exec/s: 0 rss: 74Mb L: 25/25 MS: 1 InsertByte- 00:07:17.549 [2024-11-27 15:06:42.699079] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000c9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.549 [2024-11-27 15:06:42.699108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.549 #20 NEW cov: 12477 ft: 14735 corp: 12/173b lim: 35 exec/s: 0 rss: 74Mb L: 11/25 MS: 1 CopyPart- 00:07:17.549 [2024-11-27 15:06:42.739365] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000c9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.549 [2024-11-27 15:06:42.739393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.549 [2024-11-27 15:06:42.739456] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000bf SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.549 [2024-11-27 15:06:42.739472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.549 NEW_FUNC[1/1]: 0x1c47d08 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 
00:07:17.549 #21 NEW cov: 12500 ft: 14799 corp: 13/193b lim: 35 exec/s: 0 rss: 74Mb L: 20/25 MS: 1 InsertByte- 00:07:17.549 [2024-11-27 15:06:42.799505] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000c9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.549 [2024-11-27 15:06:42.799530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.549 [2024-11-27 15:06:42.799594] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000bf SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.549 [2024-11-27 15:06:42.799614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.549 #22 NEW cov: 12500 ft: 14886 corp: 14/213b lim: 35 exec/s: 0 rss: 74Mb L: 20/25 MS: 1 ChangeBit- 00:07:17.549 [2024-11-27 15:06:42.859729] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:0000006e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.549 [2024-11-27 15:06:42.859753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.549 NEW_FUNC[1/2]: 0x471258 in feat_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:340 00:07:17.808 NEW_FUNC[2/2]: 0x138ebc8 in nvmf_ctrlr_set_features_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:1768 00:07:17.809 #23 NEW cov: 12533 ft: 14969 corp: 15/230b lim: 35 exec/s: 23 rss: 74Mb L: 17/25 MS: 1 InsertRepeatedBytes- 00:07:17.809 [2024-11-27 15:06:42.909862] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000c9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.809 [2024-11-27 15:06:42.909889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.809 [2024-11-27 15:06:42.909967] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000c9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.809 [2024-11-27 15:06:42.909984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.809 #24 NEW cov: 12533 ft: 15003 corp: 16/248b lim: 35 exec/s: 24 rss: 74Mb L: 18/25 MS: 1 EraseBytes- 00:07:17.809 [2024-11-27 15:06:42.970012] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000c9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.809 [2024-11-27 15:06:42.970039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.809 [2024-11-27 15:06:42.970116] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000c9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.809 [2024-11-27 15:06:42.970134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.809 #25 NEW cov: 12533 ft: 15012 corp: 17/266b lim: 35 exec/s: 25 rss: 75Mb L: 18/25 MS: 1 CopyPart- 00:07:17.809 [2024-11-27 15:06:43.030485] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000c9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.809 
[2024-11-27 15:06:43.030513] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.809 [2024-11-27 15:06:43.030591] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.809 [2024-11-27 15:06:43.030612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.809 [2024-11-27 15:06:43.030682] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.809 [2024-11-27 15:06:43.030698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:17.809 [2024-11-27 15:06:43.030760] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000c9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.809 [2024-11-27 15:06:43.030775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:17.809 #26 NEW cov: 12533 ft: 15306 corp: 18/294b lim: 35 exec/s: 26 rss: 75Mb L: 28/28 MS: 1 InsertRepeatedBytes- 00:07:17.809 [2024-11-27 15:06:43.090166] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000c9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.809 [2024-11-27 15:06:43.090194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.809 #27 NEW cov: 12533 ft: 15340 corp: 19/305b lim: 35 exec/s: 27 rss: 75Mb L: 11/28 MS: 1 CMP- DE: " \000\000\000"- 00:07:17.809 [2024-11-27 15:06:43.130614] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000c9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.809 [2024-11-27 15:06:43.130643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.809 [2024-11-27 15:06:43.130703] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000c9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.809 [2024-11-27 15:06:43.130719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.809 [2024-11-27 15:06:43.130778] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:000000c9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.809 [2024-11-27 15:06:43.130791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:18.068 #28 NEW cov: 12533 ft: 15400 corp: 20/326b lim: 35 exec/s: 28 rss: 75Mb L: 21/28 MS: 1 CrossOver- 00:07:18.068 [2024-11-27 15:06:43.190649] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000c9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.068 [2024-11-27 15:06:43.190677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.068 [2024-11-27 15:06:43.190740] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.068 [2024-11-27 
15:06:43.190757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.068 #29 NEW cov: 12533 ft: 15430 corp: 21/344b lim: 35 exec/s: 29 rss: 75Mb L: 18/28 MS: 1 EraseBytes- 00:07:18.068 [2024-11-27 15:06:43.250617] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000c9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.068 [2024-11-27 15:06:43.250645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.068 #30 NEW cov: 12533 ft: 15468 corp: 22/355b lim: 35 exec/s: 30 rss: 75Mb L: 11/28 MS: 1 CMP- DE: "\037\000"- 00:07:18.068 [2024-11-27 15:06:43.290908] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000c9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.068 [2024-11-27 15:06:43.290934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.068 [2024-11-27 15:06:43.291021] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000bf SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.068 [2024-11-27 15:06:43.291038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.068 #31 NEW cov: 12533 ft: 15525 corp: 23/375b lim: 35 exec/s: 31 rss: 75Mb L: 20/28 MS: 1 ChangeBit- 00:07:18.068 [2024-11-27 15:06:43.331022] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000c9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.068 [2024-11-27 15:06:43.331051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.068 [2024-11-27 15:06:43.331113] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000bf SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.068 [2024-11-27 15:06:43.331130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.068 #32 NEW cov: 12533 ft: 15596 corp: 24/395b lim: 35 exec/s: 32 rss: 75Mb L: 20/28 MS: 1 PersAutoDict- DE: "\037\000"- 00:07:18.068 [2024-11-27 15:06:43.370986] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000c9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.068 [2024-11-27 15:06:43.371015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.068 #33 NEW cov: 12533 ft: 15683 corp: 25/408b lim: 35 exec/s: 33 rss: 75Mb L: 13/28 MS: 1 PersAutoDict- DE: "\037\000"- 00:07:18.328 [2024-11-27 15:06:43.411128] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000c9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.328 [2024-11-27 15:06:43.411156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.328 #34 NEW cov: 12533 ft: 15715 corp: 26/421b lim: 35 exec/s: 34 rss: 75Mb L: 13/28 MS: 1 ShuffleBytes- 00:07:18.328 [2024-11-27 15:06:43.471289] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.328 
[2024-11-27 15:06:43.471317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.328 #35 NEW cov: 12533 ft: 15719 corp: 27/429b lim: 35 exec/s: 35 rss: 75Mb L: 8/28 MS: 1 CrossOver- 00:07:18.328 [2024-11-27 15:06:43.531591] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000c9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.328 [2024-11-27 15:06:43.531623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.328 [2024-11-27 15:06:43.531697] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000c9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.328 [2024-11-27 15:06:43.531714] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.328 #36 NEW cov: 12533 ft: 15752 corp: 28/447b lim: 35 exec/s: 36 rss: 75Mb L: 18/28 MS: 1 ChangeASCIIInt- 00:07:18.328 [2024-11-27 15:06:43.571760] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000c9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.328 [2024-11-27 15:06:43.571787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.328 [2024-11-27 15:06:43.571856] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000c9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.328 [2024-11-27 15:06:43.571872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.328 #37 NEW cov: 12533 ft: 15826 corp: 29/461b lim: 35 exec/s: 37 rss: 75Mb L: 14/28 MS: 1 InsertByte- 00:07:18.328 [2024-11-27 15:06:43.611850] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000c9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.328 [2024-11-27 15:06:43.611877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.328 [2024-11-27 15:06:43.611938] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.328 [2024-11-27 15:06:43.611954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.328 #38 NEW cov: 12533 ft: 15841 corp: 30/479b lim: 35 exec/s: 38 rss: 75Mb L: 18/28 MS: 1 ChangeByte- 00:07:18.588 [2024-11-27 15:06:43.672337] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000c9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.588 [2024-11-27 15:06:43.672363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.588 [2024-11-27 15:06:43.672422] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000076 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.588 [2024-11-27 15:06:43.672436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.588 [2024-11-27 15:06:43.672494] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES 
RESERVED cid:6 cdw10:00000076 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.588 [2024-11-27 15:06:43.672508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:18.588 [2024-11-27 15:06:43.672567] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000c9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.588 [2024-11-27 15:06:43.672583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:18.588 #39 NEW cov: 12533 ft: 15855 corp: 31/511b lim: 35 exec/s: 39 rss: 75Mb L: 32/32 MS: 1 InsertRepeatedBytes- 00:07:18.588 [2024-11-27 15:06:43.732186] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000c9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.588 [2024-11-27 15:06:43.732213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.588 [2024-11-27 15:06:43.732271] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.588 [2024-11-27 15:06:43.732285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.588 #40 NEW cov: 12533 ft: 15882 corp: 32/531b lim: 35 exec/s: 40 rss: 75Mb L: 20/32 MS: 1 PersAutoDict- DE: " \000\000\000"- 00:07:18.588 [2024-11-27 15:06:43.792706] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000c9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.588 [2024-11-27 15:06:43.792733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.588 [2024-11-27 15:06:43.792811] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000076 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.588 [2024-11-27 15:06:43.792825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.588 [2024-11-27 15:06:43.792889] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000076 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.588 [2024-11-27 15:06:43.792906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:18.588 [2024-11-27 15:06:43.792966] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000c9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.588 [2024-11-27 15:06:43.792982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:18.588 #46 NEW cov: 12533 ft: 15939 corp: 33/563b lim: 35 exec/s: 46 rss: 76Mb L: 32/32 MS: 1 PersAutoDict- DE: " \000\000\000"- 00:07:18.588 [2024-11-27 15:06:43.852508] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000c9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.588 [2024-11-27 15:06:43.852533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.588 [2024-11-27 15:06:43.852593] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED 
cid:5 cdw10:800000a2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.588 [2024-11-27 15:06:43.852612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.588 #47 NEW cov: 12533 ft: 15954 corp: 34/581b lim: 35 exec/s: 23 rss: 76Mb L: 18/32 MS: 1 CopyPart- 00:07:18.588 #47 DONE cov: 12533 ft: 15954 corp: 34/581b lim: 35 exec/s: 23 rss: 76Mb 00:07:18.588 ###### Recommended dictionary. ###### 00:07:18.588 "i4\242\277\242u\222\000" # Uses: 1 00:07:18.588 " \000\000\000" # Uses: 2 00:07:18.588 "\037\000" # Uses: 2 00:07:18.588 ###### End of recommended dictionary. ###### 00:07:18.588 Done 47 runs in 2 second(s) 00:07:18.847 15:06:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_14.conf /var/tmp/suppress_nvmf_fuzz 00:07:18.847 15:06:43 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:18.847 15:06:43 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:18.847 15:06:43 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 15 1 0x1 00:07:18.847 15:06:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=15 00:07:18.847 15:06:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:18.847 15:06:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:18.847 15:06:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:07:18.847 15:06:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_15.conf 00:07:18.847 15:06:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:18.847 15:06:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:18.847 15:06:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 15 00:07:18.847 15:06:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4415 00:07:18.847 15:06:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:07:18.847 15:06:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' 00:07:18.847 15:06:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4415"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:18.847 15:06:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:18.847 15:06:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:18.847 15:06:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' -c /tmp/fuzz_json_15.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 -Z 15 00:07:18.847 [2024-11-27 15:06:44.005034] Starting SPDK v25.01-pre git sha1 2e10c84c8 / DPDK 24.03.0 initialization... 
00:07:18.847 [2024-11-27 15:06:44.005089] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2371943 ] 00:07:19.106 [2024-11-27 15:06:44.191315] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.106 [2024-11-27 15:06:44.225091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.106 [2024-11-27 15:06:44.285624] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:19.106 [2024-11-27 15:06:44.302001] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4415 *** 00:07:19.106 INFO: Running with entropic power schedule (0xFF, 100). 00:07:19.106 INFO: Seed: 2951769758 00:07:19.106 INFO: Loaded 1 modules (389699 inline 8-bit counters): 389699 [0x2c6ea8c, 0x2ccdccf), 00:07:19.106 INFO: Loaded 1 PC tables (389699 PCs): 389699 [0x2ccdcd0,0x32c0100), 00:07:19.106 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:07:19.106 INFO: A corpus is not provided, starting from an empty corpus 00:07:19.106 #2 INITED exec/s: 0 rss: 65Mb 00:07:19.106 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:19.106 This may also happen if the target rejected all inputs we tried so far 00:07:19.365 NEW_FUNC[1/703]: 0x451248 in fuzz_admin_get_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:460 00:07:19.365 NEW_FUNC[2/703]: 0x46ba68 in feat_power_management /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:282 00:07:19.365 #7 NEW cov: 12145 ft: 12106 corp: 2/11b lim: 35 exec/s: 0 rss: 73Mb L: 10/10 MS: 5 ChangeBit-ChangeByte-CopyPart-ChangeBinInt-CMP- DE: "\007\000\000\000\000\000\000\000"- 00:07:19.624 #8 NEW cov: 12258 ft: 12638 corp: 3/21b lim: 35 exec/s: 0 rss: 73Mb L: 10/10 MS: 1 ChangeByte- 00:07:19.624 [2024-11-27 15:06:44.728460] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.624 [2024-11-27 15:06:44.728497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.624 NEW_FUNC[1/14]: 0x194db18 in spdk_nvme_print_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_qpair.c:263 00:07:19.624 NEW_FUNC[2/14]: 0x194dd58 in nvme_admin_qpair_print_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_qpair.c:202 00:07:19.624 #9 NEW cov: 12396 ft: 13327 corp: 4/39b lim: 35 exec/s: 0 rss: 73Mb L: 18/18 MS: 1 PersAutoDict- DE: "\007\000\000\000\000\000\000\000"- 00:07:19.624 [2024-11-27 15:06:44.768714] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.624 [2024-11-27 15:06:44.768741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.624 NEW_FUNC[1/1]: 0x46ed08 in feat_number_of_queues /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:318 00:07:19.624 #10 NEW cov: 12513 ft: 13860 corp: 5/63b lim: 35 exec/s: 0 rss: 74Mb L: 24/24 MS: 1 InsertRepeatedBytes- 00:07:19.624 [2024-11-27 
15:06:44.828619] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST MEM BUFFER cid:4 cdw10:0000050d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.624 [2024-11-27 15:06:44.828645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.624 [2024-11-27 15:06:44.828708] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.624 [2024-11-27 15:06:44.828723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.624 #11 NEW cov: 12513 ft: 14130 corp: 6/81b lim: 35 exec/s: 0 rss: 74Mb L: 18/24 MS: 1 CMP- DE: "\015\250\0074\244u\222\000"- 00:07:19.624 [2024-11-27 15:06:44.869006] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.624 [2024-11-27 15:06:44.869031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.624 #12 NEW cov: 12513 ft: 14201 corp: 7/105b lim: 35 exec/s: 0 rss: 74Mb L: 24/24 MS: 1 ShuffleBytes- 00:07:19.624 #13 NEW cov: 12513 ft: 14361 corp: 8/115b lim: 35 exec/s: 0 rss: 74Mb L: 10/24 MS: 1 CMP- DE: "\000\000\000\000\000\000\000\000"- 00:07:19.883 [2024-11-27 15:06:44.969417] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000013 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.883 [2024-11-27 15:06:44.969444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.883 [2024-11-27 15:06:44.969506] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000013 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.883 [2024-11-27 15:06:44.969519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:19.884 [2024-11-27 15:06:44.969579] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000013 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.884 [2024-11-27 15:06:44.969593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:19.884 #14 NEW cov: 12513 ft: 14820 corp: 9/147b lim: 35 exec/s: 0 rss: 74Mb L: 32/32 MS: 1 InsertRepeatedBytes- 00:07:19.884 [2024-11-27 15:06:45.009519] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000013 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.884 [2024-11-27 15:06:45.009544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.884 [2024-11-27 15:06:45.009610] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000013 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.884 [2024-11-27 15:06:45.009640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:19.884 [2024-11-27 15:06:45.009703] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000013 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.884 [2024-11-27 15:06:45.009717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:19.884 #15 NEW cov: 12513 ft: 14881 corp: 10/179b lim: 35 exec/s: 0 rss: 74Mb L: 32/32 MS: 1 ShuffleBytes- 00:07:19.884 NEW_FUNC[1/1]: 0x471258 in feat_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:340 00:07:19.884 #16 NEW cov: 12527 ft: 14942 corp: 11/188b lim: 35 exec/s: 0 rss: 74Mb L: 9/32 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\000"- 00:07:19.884 #17 NEW cov: 12527 ft: 14992 corp: 12/198b lim: 35 exec/s: 0 rss: 74Mb L: 10/32 MS: 1 CopyPart- 00:07:19.884 #18 NEW cov: 12527 ft: 15042 corp: 13/207b lim: 35 exec/s: 0 rss: 74Mb L: 9/32 MS: 1 ShuffleBytes- 00:07:20.173 NEW_FUNC[1/1]: 0x1c47d08 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:20.174 #19 NEW cov: 12550 ft: 15108 corp: 14/217b lim: 35 exec/s: 0 rss: 74Mb L: 10/32 MS: 1 CMP- DE: "\377\221u\244o)\304@"- 00:07:20.174 [2024-11-27 15:06:45.269964] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000629 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.174 [2024-11-27 15:06:45.269990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.174 #20 NEW cov: 12550 ft: 15168 corp: 15/235b lim: 35 exec/s: 0 rss: 74Mb L: 18/32 MS: 1 PersAutoDict- DE: "\377\221u\244o)\304@"- 00:07:20.174 [2024-11-27 15:06:45.330451] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000013 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.174 [2024-11-27 15:06:45.330476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.174 [2024-11-27 15:06:45.330543] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000013 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.174 [2024-11-27 15:06:45.330557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:20.174 [2024-11-27 15:06:45.330620] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000013 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.174 [2024-11-27 15:06:45.330633] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:20.174 #21 NEW cov: 12550 ft: 15176 corp: 16/268b lim: 35 exec/s: 21 rss: 74Mb L: 33/33 MS: 1 CopyPart- 00:07:20.174 [2024-11-27 15:06:45.370256] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.174 [2024-11-27 15:06:45.370281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.174 #22 NEW cov: 12550 ft: 15188 corp: 17/285b lim: 35 exec/s: 22 rss: 74Mb L: 17/33 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\000"- 00:07:20.174 #23 NEW cov: 12550 ft: 15197 corp: 18/295b lim: 35 exec/s: 23 rss: 74Mb L: 10/33 MS: 1 ChangeBit- 00:07:20.174 [2024-11-27 15:06:45.470801] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000013 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.174 [2024-11-27 15:06:45.470827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 
sqhd:0010 p:0 m:0 dnr:0 00:07:20.174 [2024-11-27 15:06:45.470905] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000013 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.174 [2024-11-27 15:06:45.470919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:20.174 [2024-11-27 15:06:45.470979] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000013 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.174 [2024-11-27 15:06:45.470993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:20.433 #24 NEW cov: 12550 ft: 15228 corp: 19/329b lim: 35 exec/s: 24 rss: 74Mb L: 34/34 MS: 1 InsertByte- 00:07:20.433 #27 NEW cov: 12550 ft: 15270 corp: 20/341b lim: 35 exec/s: 27 rss: 74Mb L: 12/34 MS: 3 EraseBytes-ShuffleBytes-CrossOver- 00:07:20.433 [2024-11-27 15:06:45.570806] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.433 [2024-11-27 15:06:45.570831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.433 #28 NEW cov: 12550 ft: 15292 corp: 21/359b lim: 35 exec/s: 28 rss: 75Mb L: 18/34 MS: 1 InsertByte- 00:07:20.433 #29 NEW cov: 12550 ft: 15347 corp: 22/369b lim: 35 exec/s: 29 rss: 75Mb L: 10/34 MS: 1 ChangeBinInt- 00:07:20.433 #30 NEW cov: 12550 ft: 15351 corp: 23/382b lim: 35 exec/s: 30 rss: 75Mb L: 13/34 MS: 1 InsertByte- 00:07:20.433 [2024-11-27 15:06:45.751590] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000013 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.433 [2024-11-27 15:06:45.751619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.433 [2024-11-27 15:06:45.751699] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.433 [2024-11-27 15:06:45.751714] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:20.433 [2024-11-27 15:06:45.751790] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000013 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.433 [2024-11-27 15:06:45.751803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:20.692 #31 NEW cov: 12550 ft: 15366 corp: 24/415b lim: 35 exec/s: 31 rss: 75Mb L: 33/34 MS: 1 PersAutoDict- DE: "\007\000\000\000\000\000\000\000"- 00:07:20.692 [2024-11-27 15:06:45.791381] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.692 [2024-11-27 15:06:45.791407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.692 #32 NEW cov: 12550 ft: 15379 corp: 25/430b lim: 35 exec/s: 32 rss: 75Mb L: 15/34 MS: 1 EraseBytes- 00:07:20.692 #33 NEW cov: 12550 ft: 15388 corp: 26/440b lim: 35 exec/s: 33 rss: 75Mb L: 10/34 MS: 1 CopyPart- 00:07:20.692 [2024-11-27 15:06:45.871595] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 
cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.692 [2024-11-27 15:06:45.871627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.692 [2024-11-27 15:06:45.871705] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.692 [2024-11-27 15:06:45.871720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.692 #34 NEW cov: 12550 ft: 15439 corp: 27/458b lim: 35 exec/s: 34 rss: 75Mb L: 18/34 MS: 1 ShuffleBytes- 00:07:20.692 [2024-11-27 15:06:45.932164] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000013 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.692 [2024-11-27 15:06:45.932189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.692 [2024-11-27 15:06:45.932270] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000013 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.692 [2024-11-27 15:06:45.932285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:20.692 [2024-11-27 15:06:45.932348] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000013 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.692 [2024-11-27 15:06:45.932361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:20.692 #35 NEW cov: 12550 ft: 15522 corp: 28/490b lim: 35 exec/s: 35 rss: 75Mb L: 32/34 MS: 1 ShuffleBytes- 00:07:20.692 [2024-11-27 15:06:45.972074] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.692 [2024-11-27 15:06:45.972100] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.692 [2024-11-27 15:06:45.972163] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.692 [2024-11-27 15:06:45.972178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:20.692 #36 NEW cov: 12550 ft: 15535 corp: 29/511b lim: 35 exec/s: 36 rss: 75Mb L: 21/34 MS: 1 CopyPart- 00:07:20.951 [2024-11-27 15:06:46.032266] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.951 [2024-11-27 15:06:46.032292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.951 [2024-11-27 15:06:46.032354] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.951 [2024-11-27 15:06:46.032368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:20.951 #37 NEW cov: 12550 ft: 15579 corp: 30/534b lim: 35 exec/s: 37 rss: 75Mb L: 23/34 MS: 1 PersAutoDict- DE: "\007\000\000\000\000\000\000\000"- 00:07:20.951 #38 NEW cov: 12550 ft: 15631 corp: 31/544b lim: 35 
exec/s: 38 rss: 75Mb L: 10/34 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\000"- 00:07:20.951 [2024-11-27 15:06:46.132357] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.951 [2024-11-27 15:06:46.132383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.951 NEW_FUNC[1/2]: 0x4705d8 in feat_interrupt_vector_configuration /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:332 00:07:20.951 NEW_FUNC[2/2]: 0x137d8c8 in nvmf_ctrlr_get_features_interrupt_vector_configuration /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:1709 00:07:20.951 #42 NEW cov: 12599 ft: 15691 corp: 32/558b lim: 35 exec/s: 42 rss: 75Mb L: 14/34 MS: 4 CMP-ChangeBinInt-InsertByte-CrossOver- DE: "\001\004"- 00:07:20.951 [2024-11-27 15:06:46.172550] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.951 [2024-11-27 15:06:46.172576] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.951 #43 NEW cov: 12599 ft: 15693 corp: 33/573b lim: 35 exec/s: 43 rss: 75Mb L: 15/34 MS: 1 EraseBytes- 00:07:20.951 #45 NEW cov: 12599 ft: 15701 corp: 34/585b lim: 35 exec/s: 45 rss: 75Mb L: 12/34 MS: 2 ShuffleBytes-CopyPart- 00:07:21.210 [2024-11-27 15:06:46.292689] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000019f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.210 [2024-11-27 15:06:46.292715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.210 #49 NEW cov: 12599 ft: 15710 corp: 35/595b lim: 35 exec/s: 49 rss: 75Mb L: 10/34 MS: 4 ChangeByte-CMP-InsertByte-CMP- DE: "\001\000\000\000"-"\377\377\377\365"- 00:07:21.210 [2024-11-27 15:06:46.333006] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.210 [2024-11-27 15:06:46.333031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.210 #50 NEW cov: 12599 ft: 15721 corp: 36/610b lim: 35 exec/s: 25 rss: 75Mb L: 15/34 MS: 1 ChangeByte- 00:07:21.210 #50 DONE cov: 12599 ft: 15721 corp: 36/610b lim: 35 exec/s: 25 rss: 75Mb 00:07:21.210 ###### Recommended dictionary. ###### 00:07:21.210 "\007\000\000\000\000\000\000\000" # Uses: 3 00:07:21.210 "\015\250\0074\244u\222\000" # Uses: 0 00:07:21.210 "\000\000\000\000\000\000\000\000" # Uses: 3 00:07:21.210 "\377\221u\244o)\304@" # Uses: 1 00:07:21.210 "\001\004" # Uses: 0 00:07:21.210 "\001\000\000\000" # Uses: 0 00:07:21.210 "\377\377\377\365" # Uses: 0 00:07:21.210 ###### End of recommended dictionary. 
###### 00:07:21.210 Done 50 runs in 2 second(s) 00:07:21.210 15:06:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_15.conf /var/tmp/suppress_nvmf_fuzz 00:07:21.210 15:06:46 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:21.210 15:06:46 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:21.210 15:06:46 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 16 1 0x1 00:07:21.210 15:06:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=16 00:07:21.210 15:06:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:21.210 15:06:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:21.210 15:06:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:07:21.210 15:06:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_16.conf 00:07:21.210 15:06:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:21.210 15:06:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:21.210 15:06:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 16 00:07:21.210 15:06:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4416 00:07:21.210 15:06:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:07:21.210 15:06:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' 00:07:21.210 15:06:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4416"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:21.210 15:06:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:21.210 15:06:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:21.210 15:06:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' -c /tmp/fuzz_json_16.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 -Z 16 00:07:21.210 [2024-11-27 15:06:46.506391] Starting SPDK v25.01-pre git sha1 2e10c84c8 / DPDK 24.03.0 initialization... 
00:07:21.210 [2024-11-27 15:06:46.506463] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2372380 ] 00:07:21.468 [2024-11-27 15:06:46.694945] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.468 [2024-11-27 15:06:46.728452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.468 [2024-11-27 15:06:46.787923] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:21.468 [2024-11-27 15:06:46.804281] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4416 *** 00:07:21.725 INFO: Running with entropic power schedule (0xFF, 100). 00:07:21.725 INFO: Seed: 1157798722 00:07:21.725 INFO: Loaded 1 modules (389699 inline 8-bit counters): 389699 [0x2c6ea8c, 0x2ccdccf), 00:07:21.725 INFO: Loaded 1 PC tables (389699 PCs): 389699 [0x2ccdcd0,0x32c0100), 00:07:21.725 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:07:21.725 INFO: A corpus is not provided, starting from an empty corpus 00:07:21.725 #2 INITED exec/s: 0 rss: 65Mb 00:07:21.725 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:21.725 This may also happen if the target rejected all inputs we tried so far 00:07:21.725 [2024-11-27 15:06:46.852252] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:506381209916408583 len:1800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.725 [2024-11-27 15:06:46.852283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.725 [2024-11-27 15:06:46.852353] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:506381209866536711 len:1800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.725 [2024-11-27 15:06:46.852370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.983 NEW_FUNC[1/717]: 0x452708 in fuzz_nvm_read_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:519 00:07:21.983 NEW_FUNC[2/717]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:21.983 #7 NEW cov: 12355 ft: 12354 corp: 2/60b lim: 105 exec/s: 0 rss: 73Mb L: 59/59 MS: 5 CopyPart-InsertByte-InsertByte-ChangeBinInt-InsertRepeatedBytes- 00:07:21.984 [2024-11-27 15:06:47.173043] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:506381209916408583 len:1800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.984 [2024-11-27 15:06:47.173078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.984 [2024-11-27 15:06:47.173134] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:506381209866536711 len:1800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.984 [2024-11-27 15:06:47.173153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.984 #8 NEW cov: 12471 ft: 13028 corp: 3/119b lim: 105 exec/s: 0 rss: 73Mb L: 59/59 MS: 1 ChangeByte- 
00:07:21.984 [2024-11-27 15:06:47.233039] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:506381210470516487 len:1800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.984 [2024-11-27 15:06:47.233066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.984 #13 NEW cov: 12477 ft: 13700 corp: 4/159b lim: 105 exec/s: 0 rss: 73Mb L: 40/59 MS: 5 CrossOver-ChangeByte-CopyPart-ChangeByte-CrossOver- 00:07:21.984 [2024-11-27 15:06:47.273110] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:506381210470516487 len:1800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.984 [2024-11-27 15:06:47.273138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.984 #14 NEW cov: 12562 ft: 13914 corp: 5/189b lim: 105 exec/s: 0 rss: 73Mb L: 30/59 MS: 1 EraseBytes- 00:07:22.242 [2024-11-27 15:06:47.333659] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:506381209916408583 len:1800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.242 [2024-11-27 15:06:47.333686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:22.242 [2024-11-27 15:06:47.333764] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:506381209866536711 len:1800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.242 [2024-11-27 15:06:47.333781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:22.242 [2024-11-27 15:06:47.333837] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:506381209866536711 len:1800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.242 [2024-11-27 15:06:47.333853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:22.242 [2024-11-27 15:06:47.333907] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:506381209866536711 len:1800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.242 [2024-11-27 15:06:47.333923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:22.242 #15 NEW cov: 12562 ft: 14564 corp: 6/277b lim: 105 exec/s: 0 rss: 73Mb L: 88/88 MS: 1 CrossOver- 00:07:22.242 [2024-11-27 15:06:47.373411] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:506381210470516487 len:1800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.242 [2024-11-27 15:06:47.373437] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:22.242 #16 NEW cov: 12562 ft: 14663 corp: 7/307b lim: 105 exec/s: 0 rss: 73Mb L: 30/88 MS: 1 ChangeByte- 00:07:22.242 [2024-11-27 15:06:47.433570] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:506381210470516487 len:1835 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.242 [2024-11-27 15:06:47.433604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:22.242 #17 NEW cov: 12562 ft: 14757 corp: 8/338b lim: 105 exec/s: 0 rss: 73Mb L: 31/88 
MS: 1 InsertByte- 00:07:22.243 [2024-11-27 15:06:47.494122] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:506381209916408583 len:1800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.243 [2024-11-27 15:06:47.494150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:22.243 [2024-11-27 15:06:47.494201] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:506381209866536710 len:1800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.243 [2024-11-27 15:06:47.494219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:22.243 [2024-11-27 15:06:47.494273] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:506381209866536711 len:1800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.243 [2024-11-27 15:06:47.494289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:22.243 [2024-11-27 15:06:47.494344] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:506381209866536711 len:1800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.243 [2024-11-27 15:06:47.494359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:22.243 #18 NEW cov: 12562 ft: 14821 corp: 9/426b lim: 105 exec/s: 0 rss: 74Mb L: 88/88 MS: 1 ChangeBit- 00:07:22.243 [2024-11-27 15:06:47.553908] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:506381210959415047 len:1800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.243 [2024-11-27 15:06:47.553936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:22.243 #22 NEW cov: 12562 ft: 14918 corp: 10/451b lim: 105 exec/s: 0 rss: 74Mb L: 25/88 MS: 4 ChangeBit-ShuffleBytes-ChangeBit-CrossOver- 00:07:22.502 [2024-11-27 15:06:47.594023] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:506381210424453898 len:1800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.502 [2024-11-27 15:06:47.594049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:22.502 #25 NEW cov: 12562 ft: 14985 corp: 11/478b lim: 105 exec/s: 0 rss: 74Mb L: 27/88 MS: 3 CopyPart-InsertByte-CrossOver- 00:07:22.502 [2024-11-27 15:06:47.634468] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:506381209916408583 len:1800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.502 [2024-11-27 15:06:47.634495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:22.502 [2024-11-27 15:06:47.634567] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:506381209866536711 len:1800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.502 [2024-11-27 15:06:47.634584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:22.502 [2024-11-27 15:06:47.634643] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:506381209866536711 len:1800 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:07:22.502 [2024-11-27 15:06:47.634659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:22.502 [2024-11-27 15:06:47.634714] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:506381209866536711 len:1800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.502 [2024-11-27 15:06:47.634728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:22.502 #26 NEW cov: 12562 ft: 15015 corp: 12/567b lim: 105 exec/s: 0 rss: 74Mb L: 89/89 MS: 1 CrossOver- 00:07:22.502 [2024-11-27 15:06:47.674234] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:506381210470516487 len:1800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.502 [2024-11-27 15:06:47.674263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:22.502 #27 NEW cov: 12562 ft: 15068 corp: 13/597b lim: 105 exec/s: 0 rss: 74Mb L: 30/89 MS: 1 CrossOver- 00:07:22.502 [2024-11-27 15:06:47.714737] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.502 [2024-11-27 15:06:47.714765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:22.502 [2024-11-27 15:06:47.714812] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.502 [2024-11-27 15:06:47.714827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:22.502 [2024-11-27 15:06:47.714882] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.502 [2024-11-27 15:06:47.714898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:22.502 [2024-11-27 15:06:47.714954] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.502 [2024-11-27 15:06:47.714968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:22.502 NEW_FUNC[1/1]: 0x1c47d08 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:22.502 #30 NEW cov: 12585 ft: 15118 corp: 14/692b lim: 105 exec/s: 0 rss: 74Mb L: 95/95 MS: 3 ShuffleBytes-ShuffleBytes-InsertRepeatedBytes- 00:07:22.502 [2024-11-27 15:06:47.754876] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:506381209916408583 len:1800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.502 [2024-11-27 15:06:47.754904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:22.502 [2024-11-27 15:06:47.754976] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:506381209866536711 len:1800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.502 [2024-11-27 15:06:47.754992] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:22.502 [2024-11-27 15:06:47.755047] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:4991471924787103045 len:17734 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.502 [2024-11-27 15:06:47.755061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:22.502 [2024-11-27 15:06:47.755116] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:506381209866536711 len:1800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.502 [2024-11-27 15:06:47.755132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:22.502 #31 NEW cov: 12585 ft: 15174 corp: 15/796b lim: 105 exec/s: 0 rss: 74Mb L: 104/104 MS: 1 InsertRepeatedBytes- 00:07:22.502 [2024-11-27 15:06:47.795027] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:506381209916408583 len:1800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.502 [2024-11-27 15:06:47.795054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:22.502 [2024-11-27 15:06:47.795121] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:506381209866536711 len:1800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.502 [2024-11-27 15:06:47.795137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:22.502 [2024-11-27 15:06:47.795192] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:506381209866536711 len:1800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.502 [2024-11-27 15:06:47.795207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:22.502 [2024-11-27 15:06:47.795264] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:506381209866536711 len:1800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.502 [2024-11-27 15:06:47.795283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:22.502 #32 NEW cov: 12585 ft: 15224 corp: 16/885b lim: 105 exec/s: 32 rss: 74Mb L: 89/104 MS: 1 ShuffleBytes- 00:07:22.762 [2024-11-27 15:06:47.854941] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:506381210470516487 len:1800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.762 [2024-11-27 15:06:47.854969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:22.762 [2024-11-27 15:06:47.855007] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:852151694198538 len:1800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.762 [2024-11-27 15:06:47.855023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:22.762 #33 NEW cov: 12585 ft: 15313 corp: 17/928b lim: 105 exec/s: 33 rss: 74Mb L: 43/104 MS: 1 CrossOver- 00:07:22.762 [2024-11-27 15:06:47.894997] nvme_qpair.c: 247:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:0 nsid:0 lba:506381209916408583 len:1800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.762 [2024-11-27 15:06:47.895024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:22.762 [2024-11-27 15:06:47.895064] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:506381210470516487 len:1800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.762 [2024-11-27 15:06:47.895080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:22.762 #34 NEW cov: 12585 ft: 15391 corp: 18/987b lim: 105 exec/s: 34 rss: 74Mb L: 59/104 MS: 1 CrossOver- 00:07:22.762 [2024-11-27 15:06:47.935007] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:536899719067535111 len:1800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.762 [2024-11-27 15:06:47.935035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:22.762 #35 NEW cov: 12585 ft: 15425 corp: 19/1022b lim: 105 exec/s: 35 rss: 74Mb L: 35/104 MS: 1 InsertRepeatedBytes- 00:07:22.762 [2024-11-27 15:06:47.975392] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:506381210472875783 len:1800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.762 [2024-11-27 15:06:47.975419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:22.762 [2024-11-27 15:06:47.975465] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:506381209866536711 len:1800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.762 [2024-11-27 15:06:47.975481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:22.762 [2024-11-27 15:06:47.975536] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:506381209866536711 len:1800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.762 [2024-11-27 15:06:47.975550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:22.762 #36 NEW cov: 12585 ft: 15708 corp: 20/1102b lim: 105 exec/s: 36 rss: 74Mb L: 80/104 MS: 1 CopyPart- 00:07:22.762 [2024-11-27 15:06:48.015596] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:506381209916408583 len:1800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.762 [2024-11-27 15:06:48.015629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:22.762 [2024-11-27 15:06:48.015700] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:506381209866536711 len:1800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.762 [2024-11-27 15:06:48.015717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:22.762 [2024-11-27 15:06:48.015786] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:506398802052581127 len:1800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.762 [2024-11-27 15:06:48.015803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR 
FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:22.762 [2024-11-27 15:06:48.015858] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:506381209866536711 len:1800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.762 [2024-11-27 15:06:48.015872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:22.762 #37 NEW cov: 12585 ft: 15730 corp: 21/1191b lim: 105 exec/s: 37 rss: 74Mb L: 89/104 MS: 1 ChangeBit- 00:07:22.762 [2024-11-27 15:06:48.055374] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:506381210470516487 len:1800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.762 [2024-11-27 15:06:48.055401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:22.762 #38 NEW cov: 12585 ft: 15740 corp: 22/1221b lim: 105 exec/s: 38 rss: 74Mb L: 30/104 MS: 1 ShuffleBytes- 00:07:22.762 [2024-11-27 15:06:48.096048] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:506381209916408583 len:1800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.762 [2024-11-27 15:06:48.096076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:22.762 [2024-11-27 15:06:48.096133] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:506381209866536711 len:1800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.762 [2024-11-27 15:06:48.096149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:22.762 [2024-11-27 15:06:48.096207] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:4991471924787103045 len:17734 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.762 [2024-11-27 15:06:48.096222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:22.762 [2024-11-27 15:06:48.096280] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:506381209866536711 len:1800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.762 [2024-11-27 15:06:48.096296] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:22.762 [2024-11-27 15:06:48.096353] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:0 lba:506381209866536711 len:1800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.763 [2024-11-27 15:06:48.096369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:23.021 #39 NEW cov: 12585 ft: 15845 corp: 23/1326b lim: 105 exec/s: 39 rss: 74Mb L: 105/105 MS: 1 CopyPart- 00:07:23.021 [2024-11-27 15:06:48.155709] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:506646128348301063 len:1800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.021 [2024-11-27 15:06:48.155737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.022 #40 NEW cov: 12585 ft: 15932 corp: 24/1356b lim: 105 exec/s: 40 rss: 74Mb L: 30/105 MS: 1 ChangeBinInt- 00:07:23.022 [2024-11-27 15:06:48.216119] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:506381210472875783 len:1800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.022 [2024-11-27 15:06:48.216146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.022 [2024-11-27 15:06:48.216186] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:506381209866536711 len:1800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.022 [2024-11-27 15:06:48.216205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.022 [2024-11-27 15:06:48.216259] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:506381209866536711 len:1800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.022 [2024-11-27 15:06:48.216276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:23.022 #46 NEW cov: 12585 ft: 15945 corp: 25/1436b lim: 105 exec/s: 46 rss: 74Mb L: 80/105 MS: 1 CMP- DE: "\377\377\377\005"- 00:07:23.022 [2024-11-27 15:06:48.276042] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:506381210470516487 len:1800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.022 [2024-11-27 15:06:48.276069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.022 #47 NEW cov: 12585 ft: 15953 corp: 26/1466b lim: 105 exec/s: 47 rss: 74Mb L: 30/105 MS: 1 ShuffleBytes- 00:07:23.022 [2024-11-27 15:06:48.316096] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:536899719067535111 len:1800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.022 [2024-11-27 15:06:48.316124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.022 #48 NEW cov: 12585 ft: 15960 corp: 27/1501b lim: 105 exec/s: 48 rss: 74Mb L: 35/105 MS: 1 ChangeByte- 00:07:23.280 [2024-11-27 15:06:48.376305] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:506381210470516487 len:1800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.280 [2024-11-27 15:06:48.376332] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.281 #49 NEW cov: 12585 ft: 15965 corp: 28/1531b lim: 105 exec/s: 49 rss: 74Mb L: 30/105 MS: 1 CrossOver- 00:07:23.281 [2024-11-27 15:06:48.436718] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:506381210959415047 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.281 [2024-11-27 15:06:48.436746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.281 [2024-11-27 15:06:48.436800] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:2748926567846913574 len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.281 [2024-11-27 15:06:48.436815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.281 [2024-11-27 15:06:48.436874] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:2748926567846913574 
len:9767 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.281 [2024-11-27 15:06:48.436890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:23.281 #50 NEW cov: 12585 ft: 15971 corp: 29/1612b lim: 105 exec/s: 50 rss: 74Mb L: 81/105 MS: 1 InsertRepeatedBytes- 00:07:23.281 [2024-11-27 15:06:48.497017] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:506381209916408583 len:1800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.281 [2024-11-27 15:06:48.497045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.281 [2024-11-27 15:06:48.497117] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:506381209866536711 len:1800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.281 [2024-11-27 15:06:48.497131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.281 [2024-11-27 15:06:48.497187] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:506381209866536711 len:1800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.281 [2024-11-27 15:06:48.497201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:23.281 [2024-11-27 15:06:48.497263] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:506381209866536711 len:1800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.281 [2024-11-27 15:06:48.497280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:23.281 #51 NEW cov: 12585 ft: 15995 corp: 30/1714b lim: 105 exec/s: 51 rss: 75Mb L: 102/105 MS: 1 CopyPart- 00:07:23.281 [2024-11-27 15:06:48.556758] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446468127077631999 len:1800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.281 [2024-11-27 15:06:48.556784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.281 #52 NEW cov: 12585 ft: 16020 corp: 31/1744b lim: 105 exec/s: 52 rss: 75Mb L: 30/105 MS: 1 PersAutoDict- DE: "\377\377\377\005"- 00:07:23.281 [2024-11-27 15:06:48.597002] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:506381209916408583 len:1800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.281 [2024-11-27 15:06:48.597028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.281 [2024-11-27 15:06:48.597085] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:506382090334832391 len:1800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.281 [2024-11-27 15:06:48.597101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.540 #53 NEW cov: 12585 ft: 16027 corp: 32/1803b lim: 105 exec/s: 53 rss: 75Mb L: 59/105 MS: 1 ChangeByte- 00:07:23.540 [2024-11-27 15:06:48.657102] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:506381210470516487 len:1800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.540 [2024-11-27 
15:06:48.657129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.541 #54 NEW cov: 12585 ft: 16059 corp: 33/1833b lim: 105 exec/s: 54 rss: 75Mb L: 30/105 MS: 1 ShuffleBytes- 00:07:23.541 [2024-11-27 15:06:48.697589] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:506381209916408583 len:1800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.541 [2024-11-27 15:06:48.697623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.541 [2024-11-27 15:06:48.697680] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:506381209866536711 len:1800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.541 [2024-11-27 15:06:48.697695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.541 [2024-11-27 15:06:48.697750] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:506381209866536711 len:1800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.541 [2024-11-27 15:06:48.697766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:23.541 [2024-11-27 15:06:48.697821] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:506381209866536711 len:1800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.541 [2024-11-27 15:06:48.697838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:23.541 #55 NEW cov: 12585 ft: 16064 corp: 34/1929b lim: 105 exec/s: 55 rss: 75Mb L: 96/105 MS: 1 CopyPart- 00:07:23.541 [2024-11-27 15:06:48.737405] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:536899719067535111 len:1800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.541 [2024-11-27 15:06:48.737432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.541 [2024-11-27 15:06:48.737474] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:506381209866536711 len:1800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.541 [2024-11-27 15:06:48.737490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.541 #56 NEW cov: 12585 ft: 16076 corp: 35/1971b lim: 105 exec/s: 56 rss: 75Mb L: 42/105 MS: 1 CopyPart- 00:07:23.541 [2024-11-27 15:06:48.777817] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:506381209916408583 len:1800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.541 [2024-11-27 15:06:48.777843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.541 [2024-11-27 15:06:48.777913] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:506381209866536711 len:1800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.541 [2024-11-27 15:06:48.777929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.541 [2024-11-27 15:06:48.777981] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:506381209866536711 len:1800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.541 [2024-11-27 15:06:48.777997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:23.541 [2024-11-27 15:06:48.778051] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:506381209866536765 len:1800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.541 [2024-11-27 15:06:48.778066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:23.541 #57 NEW cov: 12585 ft: 16093 corp: 36/2068b lim: 105 exec/s: 57 rss: 75Mb L: 97/105 MS: 1 InsertByte- 00:07:23.541 [2024-11-27 15:06:48.837613] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:506381210470516487 len:1800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.541 [2024-11-27 15:06:48.837640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.541 #58 NEW cov: 12585 ft: 16125 corp: 37/2109b lim: 105 exec/s: 29 rss: 75Mb L: 41/105 MS: 1 InsertByte- 00:07:23.541 #58 DONE cov: 12585 ft: 16125 corp: 37/2109b lim: 105 exec/s: 29 rss: 75Mb 00:07:23.541 ###### Recommended dictionary. ###### 00:07:23.541 "\377\377\377\005" # Uses: 1 00:07:23.541 ###### End of recommended dictionary. ###### 00:07:23.541 Done 58 runs in 2 second(s) 00:07:23.800 15:06:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_16.conf /var/tmp/suppress_nvmf_fuzz 00:07:23.800 15:06:48 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:23.800 15:06:48 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:23.800 15:06:48 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 17 1 0x1 00:07:23.800 15:06:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=17 00:07:23.800 15:06:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:23.800 15:06:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:23.800 15:06:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:07:23.800 15:06:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_17.conf 00:07:23.800 15:06:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:23.800 15:06:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:23.800 15:06:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 17 00:07:23.800 15:06:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4417 00:07:23.800 15:06:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:07:23.800 15:06:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' 00:07:23.800 15:06:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4417"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:23.800 15:06:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo 
leak:spdk_nvmf_qpair_disconnect 00:07:23.800 15:06:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:23.800 15:06:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' -c /tmp/fuzz_json_17.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 -Z 17 00:07:23.800 [2024-11-27 15:06:48.996830] Starting SPDK v25.01-pre git sha1 2e10c84c8 / DPDK 24.03.0 initialization... 00:07:23.800 [2024-11-27 15:06:48.996893] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2372758 ] 00:07:24.064 [2024-11-27 15:06:49.182913] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.064 [2024-11-27 15:06:49.217104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.064 [2024-11-27 15:06:49.276167] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:24.064 [2024-11-27 15:06:49.292532] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4417 *** 00:07:24.064 INFO: Running with entropic power schedule (0xFF, 100). 00:07:24.064 INFO: Seed: 3646809795 00:07:24.064 INFO: Loaded 1 modules (389699 inline 8-bit counters): 389699 [0x2c6ea8c, 0x2ccdccf), 00:07:24.064 INFO: Loaded 1 PC tables (389699 PCs): 389699 [0x2ccdcd0,0x32c0100), 00:07:24.064 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:07:24.064 INFO: A corpus is not provided, starting from an empty corpus 00:07:24.064 #2 INITED exec/s: 0 rss: 65Mb 00:07:24.064 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:24.064 This may also happen if the target rejected all inputs we tried so far 00:07:24.064 [2024-11-27 15:06:49.358604] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13093571284853700021 len:46518 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.064 [2024-11-27 15:06:49.358646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.064 [2024-11-27 15:06:49.358772] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:13093571283691877813 len:46518 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.064 [2024-11-27 15:06:49.358795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:24.583 NEW_FUNC[1/717]: 0x455a88 in fuzz_nvm_write_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:540 00:07:24.583 NEW_FUNC[2/717]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:24.583 #31 NEW cov: 12375 ft: 12376 corp: 2/53b lim: 120 exec/s: 0 rss: 73Mb L: 52/52 MS: 4 ChangeBinInt-ChangeBinInt-InsertByte-InsertRepeatedBytes- 00:07:24.583 [2024-11-27 15:06:49.679066] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:12442509726035258540 len:44205 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.583 [2024-11-27 15:06:49.679104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.583 [2024-11-27 15:06:49.679151] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:12442509728149187756 len:44205 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.583 [2024-11-27 15:06:49.679168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:24.583 [2024-11-27 15:06:49.679225] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:12442509728149187756 len:44205 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.583 [2024-11-27 15:06:49.679241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:24.583 [2024-11-27 15:06:49.679295] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:12442509728149187756 len:44205 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.583 [2024-11-27 15:06:49.679310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:24.583 NEW_FUNC[1/1]: 0x19fe3b8 in nvme_tcp_qpair /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_tcp.c:183 00:07:24.583 #39 NEW cov: 12492 ft: 13547 corp: 3/163b lim: 120 exec/s: 0 rss: 73Mb L: 110/110 MS: 3 ChangeBit-ChangeBit-InsertRepeatedBytes- 00:07:24.583 [2024-11-27 15:06:49.728833] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13093571284853700021 len:46518 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.583 [2024-11-27 15:06:49.728861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.583 [2024-11-27 15:06:49.728914] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:1 nsid:0 lba:13093571283691877813 len:46518 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.583 [2024-11-27 15:06:49.728930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:24.583 #40 NEW cov: 12498 ft: 13761 corp: 4/215b lim: 120 exec/s: 0 rss: 73Mb L: 52/110 MS: 1 ShuffleBytes- 00:07:24.583 [2024-11-27 15:06:49.789000] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13093571284853700021 len:46518 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.583 [2024-11-27 15:06:49.789029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.583 [2024-11-27 15:06:49.789081] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:13093571283691877813 len:46518 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.583 [2024-11-27 15:06:49.789096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:24.583 #46 NEW cov: 12583 ft: 14088 corp: 5/267b lim: 120 exec/s: 0 rss: 73Mb L: 52/110 MS: 1 ShuffleBytes- 00:07:24.583 [2024-11-27 15:06:49.829099] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13093571284853700021 len:46518 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.583 [2024-11-27 15:06:49.829127] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.583 [2024-11-27 15:06:49.829190] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:13093571283691877813 len:46518 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.583 [2024-11-27 15:06:49.829207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:24.583 #47 NEW cov: 12583 ft: 14254 corp: 6/315b lim: 120 exec/s: 0 rss: 73Mb L: 48/110 MS: 1 EraseBytes- 00:07:24.583 [2024-11-27 15:06:49.889242] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13093571284853700021 len:46518 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.583 [2024-11-27 15:06:49.889269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.583 [2024-11-27 15:06:49.889310] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:13093571283691877813 len:46518 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.583 [2024-11-27 15:06:49.889326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:24.583 #53 NEW cov: 12583 ft: 14330 corp: 7/367b lim: 120 exec/s: 0 rss: 73Mb L: 52/110 MS: 1 ChangeByte- 00:07:24.842 [2024-11-27 15:06:49.929394] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13093571284853700021 len:46518 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.842 [2024-11-27 15:06:49.929420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.842 [2024-11-27 15:06:49.929469] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:13093571283691877813 len:46518 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.842 
[2024-11-27 15:06:49.929486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:24.842 #59 NEW cov: 12583 ft: 14519 corp: 8/420b lim: 120 exec/s: 0 rss: 73Mb L: 53/110 MS: 1 InsertByte- 00:07:24.842 [2024-11-27 15:06:49.969448] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13093371495859991989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.842 [2024-11-27 15:06:49.969476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.842 [2024-11-27 15:06:49.969531] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:13093571283691877813 len:46518 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.842 [2024-11-27 15:06:49.969547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:24.842 #60 NEW cov: 12583 ft: 14597 corp: 9/472b lim: 120 exec/s: 0 rss: 73Mb L: 52/110 MS: 1 ChangeBinInt- 00:07:24.842 [2024-11-27 15:06:50.009609] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13093571284853700021 len:46518 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.842 [2024-11-27 15:06:50.009637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.842 [2024-11-27 15:06:50.009675] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:13093571283691877813 len:46518 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.842 [2024-11-27 15:06:50.009691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:24.842 #61 NEW cov: 12583 ft: 14640 corp: 10/524b lim: 120 exec/s: 0 rss: 73Mb L: 52/110 MS: 1 CopyPart- 00:07:24.842 [2024-11-27 15:06:50.069834] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13093571284853700021 len:46518 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.842 [2024-11-27 15:06:50.069865] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.842 [2024-11-27 15:06:50.069915] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:13093571283691877813 len:46518 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.842 [2024-11-27 15:06:50.069931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:24.842 #62 NEW cov: 12583 ft: 14683 corp: 11/576b lim: 120 exec/s: 0 rss: 73Mb L: 52/110 MS: 1 ShuffleBytes- 00:07:24.842 [2024-11-27 15:06:50.110207] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13093371495859991989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.842 [2024-11-27 15:06:50.110239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.842 [2024-11-27 15:06:50.110277] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.842 [2024-11-27 15:06:50.110293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 
sqhd:0003 p:0 m:0 dnr:1 00:07:24.842 [2024-11-27 15:06:50.110348] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.842 [2024-11-27 15:06:50.110367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:24.842 [2024-11-27 15:06:50.110424] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:13093571280643339701 len:46518 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.842 [2024-11-27 15:06:50.110439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:24.842 #68 NEW cov: 12583 ft: 14737 corp: 12/688b lim: 120 exec/s: 0 rss: 73Mb L: 112/112 MS: 1 InsertRepeatedBytes- 00:07:24.842 [2024-11-27 15:06:50.170049] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13093571284853700021 len:46518 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.842 [2024-11-27 15:06:50.170078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.842 [2024-11-27 15:06:50.170136] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:13093571283691877813 len:46518 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.842 [2024-11-27 15:06:50.170154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:25.102 #69 NEW cov: 12583 ft: 14769 corp: 13/740b lim: 120 exec/s: 0 rss: 73Mb L: 52/112 MS: 1 ShuffleBytes- 00:07:25.102 [2024-11-27 15:06:50.210139] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13093571284853700021 len:46518 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.102 [2024-11-27 15:06:50.210167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.102 [2024-11-27 15:06:50.210221] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:13093571283691877813 len:38326 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.102 [2024-11-27 15:06:50.210238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:25.102 NEW_FUNC[1/1]: 0x1c47d08 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:25.102 #70 NEW cov: 12606 ft: 14840 corp: 14/788b lim: 120 exec/s: 0 rss: 74Mb L: 48/112 MS: 1 ChangeBit- 00:07:25.102 [2024-11-27 15:06:50.270311] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.102 [2024-11-27 15:06:50.270338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.102 [2024-11-27 15:06:50.270400] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.102 [2024-11-27 15:06:50.270417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:25.102 #71 NEW cov: 12606 ft: 14850 corp: 15/850b lim: 120 exec/s: 0 rss: 74Mb L: 62/112 MS: 1 InsertRepeatedBytes- 
00:07:25.102 [2024-11-27 15:06:50.310422] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13093571284853831093 len:46518 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.102 [2024-11-27 15:06:50.310450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.102 [2024-11-27 15:06:50.310509] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:13093571283691877813 len:38326 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.102 [2024-11-27 15:06:50.310525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:25.102 #72 NEW cov: 12606 ft: 14869 corp: 16/898b lim: 120 exec/s: 72 rss: 74Mb L: 48/112 MS: 1 ChangeBit- 00:07:25.102 [2024-11-27 15:06:50.370658] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13093371495859991989 len:46518 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.102 [2024-11-27 15:06:50.370690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.102 [2024-11-27 15:06:50.370732] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:13093571283691877813 len:46518 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.102 [2024-11-27 15:06:50.370747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:25.102 #73 NEW cov: 12606 ft: 14929 corp: 17/950b lim: 120 exec/s: 73 rss: 74Mb L: 52/112 MS: 1 ChangeBinInt- 00:07:25.102 [2024-11-27 15:06:50.410722] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13093571284853700021 len:46518 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.102 [2024-11-27 15:06:50.410749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.102 [2024-11-27 15:06:50.410789] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:13093571283691877813 len:46518 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.102 [2024-11-27 15:06:50.410806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:25.361 #74 NEW cov: 12606 ft: 14940 corp: 18/1003b lim: 120 exec/s: 74 rss: 74Mb L: 53/112 MS: 1 InsertByte- 00:07:25.361 [2024-11-27 15:06:50.470939] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13093571284853700021 len:46518 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.361 [2024-11-27 15:06:50.470967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.361 [2024-11-27 15:06:50.471019] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:872415232 len:46518 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.361 [2024-11-27 15:06:50.471037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:25.361 #75 NEW cov: 12606 ft: 14985 corp: 19/1055b lim: 120 exec/s: 75 rss: 74Mb L: 52/112 MS: 1 ChangeBinInt- 00:07:25.361 [2024-11-27 15:06:50.531423] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.361 [2024-11-27 15:06:50.531450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.361 [2024-11-27 15:06:50.531521] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.361 [2024-11-27 15:06:50.531537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:25.361 [2024-11-27 15:06:50.531589] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.361 [2024-11-27 15:06:50.531610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:25.361 [2024-11-27 15:06:50.531666] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.361 [2024-11-27 15:06:50.531680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:25.361 #76 NEW cov: 12606 ft: 15018 corp: 20/1157b lim: 120 exec/s: 76 rss: 74Mb L: 102/112 MS: 1 InsertRepeatedBytes- 00:07:25.361 [2024-11-27 15:06:50.591588] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:3407872 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.361 [2024-11-27 15:06:50.591619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.361 [2024-11-27 15:06:50.591677] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.361 [2024-11-27 15:06:50.591695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:25.361 [2024-11-27 15:06:50.591751] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.361 [2024-11-27 15:06:50.591767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:25.361 [2024-11-27 15:06:50.591820] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.361 [2024-11-27 15:06:50.591835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:25.361 #79 NEW cov: 12606 ft: 15047 corp: 21/1253b lim: 120 exec/s: 79 rss: 74Mb L: 96/112 MS: 3 ShuffleBytes-CrossOver-InsertRepeatedBytes- 00:07:25.361 [2024-11-27 15:06:50.631331] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13093571284853700021 len:46518 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.361 [2024-11-27 15:06:50.631358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.361 [2024-11-27 15:06:50.631413] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:13093571283691877813 len:38320 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:07:25.361 [2024-11-27 15:06:50.631430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:25.361 #80 NEW cov: 12606 ft: 15123 corp: 22/1301b lim: 120 exec/s: 80 rss: 74Mb L: 48/112 MS: 1 ChangeBinInt- 00:07:25.361 [2024-11-27 15:06:50.671606] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13093571283691877813 len:46518 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.362 [2024-11-27 15:06:50.671649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.362 [2024-11-27 15:06:50.671701] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:10787728274478183861 len:46587 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.362 [2024-11-27 15:06:50.671718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:25.362 [2024-11-27 15:06:50.671773] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:13093571283691877813 len:46518 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.362 [2024-11-27 15:06:50.671787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:25.621 #81 NEW cov: 12606 ft: 15440 corp: 23/1384b lim: 120 exec/s: 81 rss: 74Mb L: 83/112 MS: 1 CopyPart- 00:07:25.621 [2024-11-27 15:06:50.731606] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13093571284853700021 len:46518 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.621 [2024-11-27 15:06:50.731633] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.621 [2024-11-27 15:06:50.731672] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:13093571283691877813 len:46518 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.621 [2024-11-27 15:06:50.731704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:25.621 #82 NEW cov: 12606 ft: 15467 corp: 24/1436b lim: 120 exec/s: 82 rss: 74Mb L: 52/112 MS: 1 ShuffleBytes- 00:07:25.621 [2024-11-27 15:06:50.771562] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13093571284853700021 len:46518 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.621 [2024-11-27 15:06:50.771589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.621 #83 NEW cov: 12606 ft: 16243 corp: 25/1468b lim: 120 exec/s: 83 rss: 74Mb L: 32/112 MS: 1 EraseBytes- 00:07:25.621 [2024-11-27 15:06:50.832236] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13093571284853700021 len:46518 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.621 [2024-11-27 15:06:50.832266] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.621 [2024-11-27 15:06:50.832316] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:13093571283691877813 len:46518 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.621 [2024-11-27 15:06:50.832332] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:25.621 [2024-11-27 15:06:50.832387] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:13093571283691877813 len:46518 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.621 [2024-11-27 15:06:50.832403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:25.621 [2024-11-27 15:06:50.832457] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:13093571283691877813 len:46518 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.621 [2024-11-27 15:06:50.832473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:25.621 #84 NEW cov: 12606 ft: 16250 corp: 26/1570b lim: 120 exec/s: 84 rss: 74Mb L: 102/112 MS: 1 CrossOver- 00:07:25.621 [2024-11-27 15:06:50.892107] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13093571284853700021 len:46518 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.621 [2024-11-27 15:06:50.892134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.621 [2024-11-27 15:06:50.892173] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:13093571283691877813 len:46518 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.621 [2024-11-27 15:06:50.892189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:25.621 #85 NEW cov: 12606 ft: 16254 corp: 27/1622b lim: 120 exec/s: 85 rss: 74Mb L: 52/112 MS: 1 ShuffleBytes- 00:07:25.621 [2024-11-27 15:06:50.932302] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13093571284853700021 len:46518 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.621 [2024-11-27 15:06:50.932329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.621 [2024-11-27 15:06:50.932392] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:13093571283691877813 len:46518 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.621 [2024-11-27 15:06:50.932407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:25.621 [2024-11-27 15:06:50.932464] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:13093571283691877813 len:46518 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.621 [2024-11-27 15:06:50.932479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:25.880 #86 NEW cov: 12606 ft: 16283 corp: 28/1704b lim: 120 exec/s: 86 rss: 74Mb L: 82/112 MS: 1 EraseBytes- 00:07:25.880 [2024-11-27 15:06:50.992482] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13093571284853700021 len:46518 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.880 [2024-11-27 15:06:50.992510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.880 [2024-11-27 15:06:50.992558] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 
lba:13093571283691877813 len:46518 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.881 [2024-11-27 15:06:50.992574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:25.881 [2024-11-27 15:06:50.992636] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:13093571283691877813 len:46518 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.881 [2024-11-27 15:06:50.992650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:25.881 #87 NEW cov: 12606 ft: 16364 corp: 29/1786b lim: 120 exec/s: 87 rss: 75Mb L: 82/112 MS: 1 CopyPart- 00:07:25.881 [2024-11-27 15:06:51.052540] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.881 [2024-11-27 15:06:51.052568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.881 [2024-11-27 15:06:51.052612] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:627065225216 len:53560 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.881 [2024-11-27 15:06:51.052628] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:25.881 #88 NEW cov: 12606 ft: 16396 corp: 30/1856b lim: 120 exec/s: 88 rss: 75Mb L: 70/112 MS: 1 CMP- DE: "\000\222u\247\3217\376\034"- 00:07:25.881 [2024-11-27 15:06:51.092950] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:12442509726027787436 len:44205 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.881 [2024-11-27 15:06:51.092977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.881 [2024-11-27 15:06:51.093027] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:12442509728149187756 len:44205 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.881 [2024-11-27 15:06:51.093043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:25.881 [2024-11-27 15:06:51.093098] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:12442509728149187756 len:44205 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.881 [2024-11-27 15:06:51.093114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:25.881 [2024-11-27 15:06:51.093170] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:12442509728149187756 len:44205 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.881 [2024-11-27 15:06:51.093186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:25.881 #89 NEW cov: 12606 ft: 16411 corp: 31/1967b lim: 120 exec/s: 89 rss: 75Mb L: 111/112 MS: 1 InsertByte- 00:07:25.881 [2024-11-27 15:06:51.152778] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13093571284853700021 len:46518 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.881 [2024-11-27 15:06:51.152806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 
cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.881 [2024-11-27 15:06:51.152844] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:13093571283691877813 len:38320 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.881 [2024-11-27 15:06:51.152860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:25.881 #90 NEW cov: 12606 ft: 16460 corp: 32/2015b lim: 120 exec/s: 90 rss: 75Mb L: 48/112 MS: 1 ShuffleBytes- 00:07:25.881 [2024-11-27 15:06:51.212988] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13093371495859991989 len:46518 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.881 [2024-11-27 15:06:51.213016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.881 [2024-11-27 15:06:51.213070] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:13093571283691877813 len:46412 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.881 [2024-11-27 15:06:51.213090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.141 #91 NEW cov: 12606 ft: 16468 corp: 33/2067b lim: 120 exec/s: 91 rss: 75Mb L: 52/112 MS: 1 ChangeBinInt- 00:07:26.141 [2024-11-27 15:06:51.273133] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13093571284853700021 len:46518 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.141 [2024-11-27 15:06:51.273161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.141 [2024-11-27 15:06:51.273212] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:13093571283691877813 len:46518 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.141 [2024-11-27 15:06:51.273227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.141 #92 NEW cov: 12606 ft: 16480 corp: 34/2124b lim: 120 exec/s: 92 rss: 75Mb L: 57/112 MS: 1 EraseBytes- 00:07:26.141 [2024-11-27 15:06:51.313211] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13093371495859991989 len:46518 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.141 [2024-11-27 15:06:51.313238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.141 [2024-11-27 15:06:51.313277] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:13093571283691877813 len:46412 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.141 [2024-11-27 15:06:51.313293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.141 #93 NEW cov: 12606 ft: 16496 corp: 35/2176b lim: 120 exec/s: 46 rss: 75Mb L: 52/112 MS: 1 PersAutoDict- DE: "\000\222u\247\3217\376\034"- 00:07:26.141 #93 DONE cov: 12606 ft: 16496 corp: 35/2176b lim: 120 exec/s: 46 rss: 75Mb 00:07:26.141 ###### Recommended dictionary. ###### 00:07:26.141 "\000\222u\247\3217\376\034" # Uses: 1 00:07:26.141 ###### End of recommended dictionary. 
###### 00:07:26.141 Done 93 runs in 2 second(s) 00:07:26.141 15:06:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_17.conf /var/tmp/suppress_nvmf_fuzz 00:07:26.141 15:06:51 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:26.141 15:06:51 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:26.141 15:06:51 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 18 1 0x1 00:07:26.141 15:06:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=18 00:07:26.141 15:06:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:26.141 15:06:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:26.141 15:06:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:07:26.141 15:06:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_18.conf 00:07:26.141 15:06:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:26.141 15:06:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:26.141 15:06:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 18 00:07:26.141 15:06:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4418 00:07:26.141 15:06:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:07:26.141 15:06:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' 00:07:26.141 15:06:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4418"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:26.141 15:06:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:26.141 15:06:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:26.141 15:06:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' -c /tmp/fuzz_json_18.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 -Z 18 00:07:26.400 [2024-11-27 15:06:51.494507] Starting SPDK v25.01-pre git sha1 2e10c84c8 / DPDK 24.03.0 initialization... 
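The nvmf/run.sh lines above show how each fuzzer instance is parameterized: the run index (18 here) picks the corpus directory llvm_nvmf_18, a per-run JSON config at /tmp/fuzz_json_18.conf, and a dedicated TCP port 4418 built from the zero-padded index; the template fuzz_json.conf has its trsvcid rewritten by sed, two known leaks are whitelisted for LSAN, and llvm_nvme_fuzz is then launched against the resulting transport ID. A condensed bash sketch of that per-run setup follows; the redirection targets for the sed and echo steps, and the exact "44" + %02d port construction, are assumptions since the log only shows the commands themselves, not the script source.

```bash
#!/usr/bin/env bash
# Sketch of the per-run fuzzer setup seen in the log (run 18), not the actual run.sh.
fuzzer_type=18
timen=1                                   # -t: run each fuzzer for 1 second
core=0x1                                  # -m: reactor core mask
spdk_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk

corpus_dir=$spdk_dir/../corpus/llvm_nvmf_${fuzzer_type}
nvmf_cfg=/tmp/fuzz_json_${fuzzer_type}.conf
suppress_file=/var/tmp/suppress_nvmf_fuzz
export LSAN_OPTIONS="report_objects=1:suppressions=${suppress_file}:print_suppressions=0"

# Port 4418 = "44" + zero-padded run index (assumed from the printf %02d / port=4418 lines).
port=44$(printf %02d "$fuzzer_type")
mkdir -p "$corpus_dir"
trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port"

# Rewrite the template config so this run's target listens on its own port (output path assumed).
sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
    "$spdk_dir/test/fuzz/llvm/nvmf/fuzz_json.conf" > "$nvmf_cfg"

# Leaks tolerated while fuzzing (redirection into the suppression file assumed).
echo leak:spdk_nvmf_qpair_disconnect >  "$suppress_file"
echo leak:nvmf_ctrlr_create          >> "$suppress_file"

"$spdk_dir/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" -m "$core" -s 512 \
    -P "$spdk_dir/../output/llvm/" -F "$trid" -c "$nvmf_cfg" \
    -t "$timen" -D "$corpus_dir" -Z "$fuzzer_type"
```

Deriving a distinct trsvcid from the run index presumably keeps consecutive short runs (fuzzer 17, then 18, and so on) from contending for the same TCP listener, and the per-run config and corpus paths keep their inputs and state isolated from one another.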
00:07:26.400 [2024-11-27 15:06:51.494565] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2373297 ] 00:07:26.400 [2024-11-27 15:06:51.677282] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.400 [2024-11-27 15:06:51.710304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.658 [2024-11-27 15:06:51.769500] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:26.658 [2024-11-27 15:06:51.785859] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4418 *** 00:07:26.658 INFO: Running with entropic power schedule (0xFF, 100). 00:07:26.658 INFO: Seed: 1845836252 00:07:26.658 INFO: Loaded 1 modules (389699 inline 8-bit counters): 389699 [0x2c6ea8c, 0x2ccdccf), 00:07:26.658 INFO: Loaded 1 PC tables (389699 PCs): 389699 [0x2ccdcd0,0x32c0100), 00:07:26.658 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:07:26.658 INFO: A corpus is not provided, starting from an empty corpus 00:07:26.658 #2 INITED exec/s: 0 rss: 66Mb 00:07:26.658 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:26.658 This may also happen if the target rejected all inputs we tried so far 00:07:26.658 [2024-11-27 15:06:51.851365] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:26.658 [2024-11-27 15:06:51.851395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.658 [2024-11-27 15:06:51.851445] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:26.658 [2024-11-27 15:06:51.851460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.658 [2024-11-27 15:06:51.851511] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:26.658 [2024-11-27 15:06:51.851526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:26.658 [2024-11-27 15:06:51.851573] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:26.658 [2024-11-27 15:06:51.851588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:26.916 NEW_FUNC[1/715]: 0x459378 in fuzz_nvm_write_zeroes_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:562 00:07:26.916 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:26.916 #13 NEW cov: 12297 ft: 12298 corp: 2/91b lim: 100 exec/s: 0 rss: 73Mb L: 90/90 MS: 1 InsertRepeatedBytes- 00:07:26.916 [2024-11-27 15:06:52.192438] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:26.916 [2024-11-27 15:06:52.192496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.916 [2024-11-27 15:06:52.192586] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:26.916 [2024-11-27 15:06:52.192622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.916 [2024-11-27 15:06:52.192696] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:26.916 [2024-11-27 15:06:52.192728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:26.916 [2024-11-27 15:06:52.192810] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:26.916 [2024-11-27 15:06:52.192837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:26.916 NEW_FUNC[1/1]: 0x1a481e8 in nvme_tcp_ctrlr_connect_qpair_poll /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_tcp.c:2299 00:07:26.916 #14 NEW cov: 12434 ft: 12987 corp: 3/185b lim: 100 exec/s: 0 rss: 73Mb L: 94/94 MS: 1 InsertRepeatedBytes- 00:07:26.916 [2024-11-27 15:06:52.242311] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:26.916 [2024-11-27 15:06:52.242339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.916 [2024-11-27 15:06:52.242386] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:26.916 [2024-11-27 15:06:52.242401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.916 [2024-11-27 15:06:52.242450] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:26.916 [2024-11-27 15:06:52.242464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:26.916 [2024-11-27 15:06:52.242515] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:26.917 [2024-11-27 15:06:52.242529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:27.175 #15 NEW cov: 12440 ft: 13285 corp: 4/275b lim: 100 exec/s: 0 rss: 73Mb L: 90/94 MS: 1 ChangeBit- 00:07:27.175 [2024-11-27 15:06:52.302444] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:27.175 [2024-11-27 15:06:52.302471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:27.175 [2024-11-27 15:06:52.302517] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:27.175 [2024-11-27 15:06:52.302533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:27.175 [2024-11-27 15:06:52.302603] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:27.175 [2024-11-27 15:06:52.302618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:27.175 [2024-11-27 15:06:52.302670] nvme_qpair.c: 256:nvme_io_qpair_print_command: 
*NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:27.175 [2024-11-27 15:06:52.302684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:27.175 #16 NEW cov: 12525 ft: 13572 corp: 5/365b lim: 100 exec/s: 0 rss: 73Mb L: 90/94 MS: 1 ChangeBinInt- 00:07:27.175 [2024-11-27 15:06:52.342551] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:27.175 [2024-11-27 15:06:52.342577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:27.175 [2024-11-27 15:06:52.342649] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:27.175 [2024-11-27 15:06:52.342665] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:27.175 [2024-11-27 15:06:52.342717] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:27.175 [2024-11-27 15:06:52.342732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:27.175 [2024-11-27 15:06:52.342784] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:27.175 [2024-11-27 15:06:52.342798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:27.175 #17 NEW cov: 12525 ft: 13630 corp: 6/460b lim: 100 exec/s: 0 rss: 73Mb L: 95/95 MS: 1 InsertByte- 00:07:27.175 [2024-11-27 15:06:52.402747] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:27.175 [2024-11-27 15:06:52.402774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:27.175 [2024-11-27 15:06:52.402827] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:27.175 [2024-11-27 15:06:52.402840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:27.175 [2024-11-27 15:06:52.402890] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:27.175 [2024-11-27 15:06:52.402905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:27.175 [2024-11-27 15:06:52.402955] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:27.175 [2024-11-27 15:06:52.402969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:27.175 #18 NEW cov: 12525 ft: 13792 corp: 7/551b lim: 100 exec/s: 0 rss: 73Mb L: 91/95 MS: 1 InsertByte- 00:07:27.175 [2024-11-27 15:06:52.442906] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:27.175 [2024-11-27 15:06:52.442932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:27.175 [2024-11-27 15:06:52.442979] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:27.175 [2024-11-27 
15:06:52.442994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:27.175 [2024-11-27 15:06:52.443046] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:27.175 [2024-11-27 15:06:52.443060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:27.175 [2024-11-27 15:06:52.443110] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:27.175 [2024-11-27 15:06:52.443125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:27.175 #19 NEW cov: 12525 ft: 13886 corp: 8/641b lim: 100 exec/s: 0 rss: 73Mb L: 90/95 MS: 1 ChangeBinInt- 00:07:27.175 [2024-11-27 15:06:52.483009] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:27.175 [2024-11-27 15:06:52.483034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:27.175 [2024-11-27 15:06:52.483102] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:27.175 [2024-11-27 15:06:52.483118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:27.175 [2024-11-27 15:06:52.483167] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:27.175 [2024-11-27 15:06:52.483182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:27.175 [2024-11-27 15:06:52.483235] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:27.176 [2024-11-27 15:06:52.483250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:27.435 #20 NEW cov: 12525 ft: 13976 corp: 9/736b lim: 100 exec/s: 0 rss: 74Mb L: 95/95 MS: 1 ChangeBinInt- 00:07:27.435 [2024-11-27 15:06:52.543119] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:27.435 [2024-11-27 15:06:52.543164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:27.435 [2024-11-27 15:06:52.543214] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:27.435 [2024-11-27 15:06:52.543229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:27.435 [2024-11-27 15:06:52.543279] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:27.435 [2024-11-27 15:06:52.543295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:27.435 [2024-11-27 15:06:52.543347] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:27.435 [2024-11-27 15:06:52.543362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:27.435 #21 NEW cov: 12525 ft: 
14040 corp: 10/826b lim: 100 exec/s: 0 rss: 74Mb L: 90/95 MS: 1 ChangeByte- 00:07:27.435 [2024-11-27 15:06:52.603367] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:27.435 [2024-11-27 15:06:52.603393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:27.435 [2024-11-27 15:06:52.603445] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:27.435 [2024-11-27 15:06:52.603457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:27.435 [2024-11-27 15:06:52.603508] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:27.435 [2024-11-27 15:06:52.603523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:27.435 [2024-11-27 15:06:52.603575] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:27.435 [2024-11-27 15:06:52.603588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:27.435 #22 NEW cov: 12525 ft: 14128 corp: 11/916b lim: 100 exec/s: 0 rss: 74Mb L: 90/95 MS: 1 ChangeBit- 00:07:27.435 [2024-11-27 15:06:52.663497] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:27.435 [2024-11-27 15:06:52.663524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:27.435 [2024-11-27 15:06:52.663590] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:27.435 [2024-11-27 15:06:52.663611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:27.435 [2024-11-27 15:06:52.663662] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:27.435 [2024-11-27 15:06:52.663677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:27.435 [2024-11-27 15:06:52.663731] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:27.435 [2024-11-27 15:06:52.663757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:27.435 #23 NEW cov: 12525 ft: 14147 corp: 12/1006b lim: 100 exec/s: 0 rss: 74Mb L: 90/95 MS: 1 ChangeByte- 00:07:27.435 [2024-11-27 15:06:52.703625] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:27.435 [2024-11-27 15:06:52.703653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:27.435 [2024-11-27 15:06:52.703696] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:27.435 [2024-11-27 15:06:52.703711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:27.435 [2024-11-27 15:06:52.703763] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES 
(08) sqid:1 cid:2 nsid:0 00:07:27.435 [2024-11-27 15:06:52.703777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:27.435 [2024-11-27 15:06:52.703831] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:27.435 [2024-11-27 15:06:52.703845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:27.435 NEW_FUNC[1/1]: 0x1c47d08 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:27.435 #24 NEW cov: 12548 ft: 14187 corp: 13/1103b lim: 100 exec/s: 0 rss: 74Mb L: 97/97 MS: 1 InsertRepeatedBytes- 00:07:27.435 [2024-11-27 15:06:52.743713] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:27.436 [2024-11-27 15:06:52.743738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:27.436 [2024-11-27 15:06:52.743809] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:27.436 [2024-11-27 15:06:52.743824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:27.436 [2024-11-27 15:06:52.743875] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:27.436 [2024-11-27 15:06:52.743889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:27.436 [2024-11-27 15:06:52.743939] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:27.436 [2024-11-27 15:06:52.743953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:27.436 #25 NEW cov: 12548 ft: 14205 corp: 14/1197b lim: 100 exec/s: 0 rss: 74Mb L: 94/97 MS: 1 ChangeBit- 00:07:27.695 [2024-11-27 15:06:52.784001] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:27.695 [2024-11-27 15:06:52.784026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:27.695 [2024-11-27 15:06:52.784095] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:27.695 [2024-11-27 15:06:52.784108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:27.695 [2024-11-27 15:06:52.784158] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:27.695 [2024-11-27 15:06:52.784172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:27.695 [2024-11-27 15:06:52.784223] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:27.695 [2024-11-27 15:06:52.784238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:27.695 [2024-11-27 15:06:52.784290] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:4 nsid:0 00:07:27.695 [2024-11-27 
15:06:52.784305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:27.695 #26 NEW cov: 12548 ft: 14267 corp: 15/1297b lim: 100 exec/s: 0 rss: 74Mb L: 100/100 MS: 1 CrossOver- 00:07:27.695 [2024-11-27 15:06:52.844023] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:27.695 [2024-11-27 15:06:52.844049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:27.695 [2024-11-27 15:06:52.844099] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:27.695 [2024-11-27 15:06:52.844114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:27.695 [2024-11-27 15:06:52.844165] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:27.695 [2024-11-27 15:06:52.844180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:27.695 [2024-11-27 15:06:52.844232] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:27.695 [2024-11-27 15:06:52.844246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:27.695 #27 NEW cov: 12548 ft: 14285 corp: 16/1390b lim: 100 exec/s: 27 rss: 74Mb L: 93/100 MS: 1 CopyPart- 00:07:27.695 [2024-11-27 15:06:52.884158] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:27.695 [2024-11-27 15:06:52.884184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:27.695 [2024-11-27 15:06:52.884234] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:27.695 [2024-11-27 15:06:52.884249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:27.695 [2024-11-27 15:06:52.884298] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:27.695 [2024-11-27 15:06:52.884312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:27.695 [2024-11-27 15:06:52.884360] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:27.695 [2024-11-27 15:06:52.884373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:27.695 #28 NEW cov: 12548 ft: 14301 corp: 17/1480b lim: 100 exec/s: 28 rss: 74Mb L: 90/100 MS: 1 CrossOver- 00:07:27.695 [2024-11-27 15:06:52.944298] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:27.695 [2024-11-27 15:06:52.944324] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:27.695 [2024-11-27 15:06:52.944389] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:27.695 [2024-11-27 15:06:52.944404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:27.695 [2024-11-27 15:06:52.944454] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:27.695 [2024-11-27 15:06:52.944468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:27.695 [2024-11-27 15:06:52.944519] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:27.695 [2024-11-27 15:06:52.944532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:27.695 #29 NEW cov: 12548 ft: 14315 corp: 18/1570b lim: 100 exec/s: 29 rss: 74Mb L: 90/100 MS: 1 ChangeBinInt- 00:07:27.695 [2024-11-27 15:06:53.004471] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:27.695 [2024-11-27 15:06:53.004497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:27.695 [2024-11-27 15:06:53.004545] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:27.696 [2024-11-27 15:06:53.004559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:27.696 [2024-11-27 15:06:53.004617] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:27.696 [2024-11-27 15:06:53.004632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:27.696 [2024-11-27 15:06:53.004682] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:27.696 [2024-11-27 15:06:53.004697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:27.696 #30 NEW cov: 12548 ft: 14338 corp: 19/1660b lim: 100 exec/s: 30 rss: 74Mb L: 90/100 MS: 1 ChangeBit- 00:07:27.955 [2024-11-27 15:06:53.044588] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:27.955 [2024-11-27 15:06:53.044617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:27.955 [2024-11-27 15:06:53.044667] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:27.955 [2024-11-27 15:06:53.044682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:27.955 [2024-11-27 15:06:53.044730] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:27.955 [2024-11-27 15:06:53.044744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:27.955 [2024-11-27 15:06:53.044793] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:27.955 [2024-11-27 15:06:53.044808] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:27.955 #31 NEW cov: 12548 ft: 14384 corp: 20/1750b lim: 100 exec/s: 31 rss: 74Mb L: 90/100 MS: 1 
ChangeBit- 00:07:27.955 [2024-11-27 15:06:53.104754] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:27.955 [2024-11-27 15:06:53.104780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:27.955 [2024-11-27 15:06:53.104834] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:27.955 [2024-11-27 15:06:53.104848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:27.955 [2024-11-27 15:06:53.104900] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:27.955 [2024-11-27 15:06:53.104914] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:27.955 [2024-11-27 15:06:53.104966] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:27.955 [2024-11-27 15:06:53.104981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:27.955 #32 NEW cov: 12548 ft: 14417 corp: 21/1847b lim: 100 exec/s: 32 rss: 74Mb L: 97/100 MS: 1 InsertRepeatedBytes- 00:07:27.955 [2024-11-27 15:06:53.164910] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:27.955 [2024-11-27 15:06:53.164936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:27.955 [2024-11-27 15:06:53.165001] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:27.955 [2024-11-27 15:06:53.165017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:27.955 [2024-11-27 15:06:53.165066] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:27.955 [2024-11-27 15:06:53.165079] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:27.955 [2024-11-27 15:06:53.165134] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:27.955 [2024-11-27 15:06:53.165149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:27.955 #33 NEW cov: 12548 ft: 14420 corp: 22/1943b lim: 100 exec/s: 33 rss: 74Mb L: 96/100 MS: 1 InsertRepeatedBytes- 00:07:27.955 [2024-11-27 15:06:53.225056] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:27.955 [2024-11-27 15:06:53.225084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:27.955 [2024-11-27 15:06:53.225133] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:27.955 [2024-11-27 15:06:53.225147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:27.955 [2024-11-27 15:06:53.225197] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:27.955 
[2024-11-27 15:06:53.225212] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:27.955 [2024-11-27 15:06:53.225263] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:27.955 [2024-11-27 15:06:53.225277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:27.955 #34 NEW cov: 12548 ft: 14423 corp: 23/2039b lim: 100 exec/s: 34 rss: 75Mb L: 96/100 MS: 1 ChangeByte- 00:07:27.955 [2024-11-27 15:06:53.285107] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:27.955 [2024-11-27 15:06:53.285132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:27.955 [2024-11-27 15:06:53.285184] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:27.955 [2024-11-27 15:06:53.285198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:27.955 [2024-11-27 15:06:53.285251] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:27.955 [2024-11-27 15:06:53.285266] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:28.214 #35 NEW cov: 12548 ft: 14705 corp: 24/2104b lim: 100 exec/s: 35 rss: 75Mb L: 65/100 MS: 1 EraseBytes- 00:07:28.214 [2024-11-27 15:06:53.345402] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:28.214 [2024-11-27 15:06:53.345428] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.214 [2024-11-27 15:06:53.345477] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:28.214 [2024-11-27 15:06:53.345488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.214 [2024-11-27 15:06:53.345537] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:28.214 [2024-11-27 15:06:53.345551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:28.214 [2024-11-27 15:06:53.345606] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:28.214 [2024-11-27 15:06:53.345620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:28.214 #36 NEW cov: 12548 ft: 14784 corp: 25/2194b lim: 100 exec/s: 36 rss: 75Mb L: 90/100 MS: 1 CopyPart- 00:07:28.214 [2024-11-27 15:06:53.405563] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:28.214 [2024-11-27 15:06:53.405588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.214 [2024-11-27 15:06:53.405664] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:28.214 [2024-11-27 15:06:53.405679] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.214 [2024-11-27 15:06:53.405729] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:28.214 [2024-11-27 15:06:53.405743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:28.214 [2024-11-27 15:06:53.405794] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:28.214 [2024-11-27 15:06:53.405809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:28.214 #37 NEW cov: 12548 ft: 14796 corp: 26/2291b lim: 100 exec/s: 37 rss: 75Mb L: 97/100 MS: 1 CopyPart- 00:07:28.214 [2024-11-27 15:06:53.465746] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:28.214 [2024-11-27 15:06:53.465771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.214 [2024-11-27 15:06:53.465842] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:28.214 [2024-11-27 15:06:53.465856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.214 [2024-11-27 15:06:53.465906] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:28.214 [2024-11-27 15:06:53.465920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:28.214 [2024-11-27 15:06:53.465971] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:28.215 [2024-11-27 15:06:53.465986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:28.215 #38 NEW cov: 12548 ft: 14818 corp: 27/2381b lim: 100 exec/s: 38 rss: 75Mb L: 90/100 MS: 1 ShuffleBytes- 00:07:28.215 [2024-11-27 15:06:53.505729] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:28.215 [2024-11-27 15:06:53.505755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.215 [2024-11-27 15:06:53.505819] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:28.215 [2024-11-27 15:06:53.505834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.215 [2024-11-27 15:06:53.505888] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:28.215 [2024-11-27 15:06:53.505903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:28.215 #39 NEW cov: 12548 ft: 14832 corp: 28/2453b lim: 100 exec/s: 39 rss: 75Mb L: 72/100 MS: 1 EraseBytes- 00:07:28.474 [2024-11-27 15:06:53.566048] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:28.474 [2024-11-27 15:06:53.566074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT 
(00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.474 [2024-11-27 15:06:53.566122] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:28.474 [2024-11-27 15:06:53.566137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.474 [2024-11-27 15:06:53.566189] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:28.474 [2024-11-27 15:06:53.566204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:28.474 [2024-11-27 15:06:53.566259] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:28.474 [2024-11-27 15:06:53.566274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:28.474 #40 NEW cov: 12548 ft: 14836 corp: 29/2540b lim: 100 exec/s: 40 rss: 75Mb L: 87/100 MS: 1 EraseBytes- 00:07:28.474 [2024-11-27 15:06:53.605977] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:28.474 [2024-11-27 15:06:53.606003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.474 [2024-11-27 15:06:53.606050] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:28.474 [2024-11-27 15:06:53.606063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.474 [2024-11-27 15:06:53.606114] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:28.474 [2024-11-27 15:06:53.606128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:28.474 #41 NEW cov: 12548 ft: 14844 corp: 30/2605b lim: 100 exec/s: 41 rss: 75Mb L: 65/100 MS: 1 ShuffleBytes- 00:07:28.474 [2024-11-27 15:06:53.666253] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:28.474 [2024-11-27 15:06:53.666278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.474 [2024-11-27 15:06:53.666348] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:28.474 [2024-11-27 15:06:53.666363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.474 [2024-11-27 15:06:53.666414] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:28.474 [2024-11-27 15:06:53.666428] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:28.474 [2024-11-27 15:06:53.666481] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:28.474 [2024-11-27 15:06:53.666497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:28.474 #42 NEW cov: 12548 ft: 14858 corp: 31/2702b lim: 100 exec/s: 42 rss: 75Mb L: 97/100 MS: 1 ChangeBit- 00:07:28.474 
[2024-11-27 15:06:53.706136] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:28.474 [2024-11-27 15:06:53.706161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.474 [2024-11-27 15:06:53.706200] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:28.474 [2024-11-27 15:06:53.706215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.474 #43 NEW cov: 12548 ft: 15210 corp: 32/2750b lim: 100 exec/s: 43 rss: 75Mb L: 48/100 MS: 1 EraseBytes- 00:07:28.474 [2024-11-27 15:06:53.746728] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:28.474 [2024-11-27 15:06:53.746754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.474 [2024-11-27 15:06:53.746809] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:28.474 [2024-11-27 15:06:53.746824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.474 [2024-11-27 15:06:53.746873] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:28.474 [2024-11-27 15:06:53.746890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:28.474 [2024-11-27 15:06:53.746942] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:28.474 [2024-11-27 15:06:53.746956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:28.474 #44 NEW cov: 12557 ft: 15257 corp: 33/2847b lim: 100 exec/s: 44 rss: 75Mb L: 97/100 MS: 1 ChangeByte- 00:07:28.474 [2024-11-27 15:06:53.786609] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:28.474 [2024-11-27 15:06:53.786635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.474 [2024-11-27 15:06:53.786703] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:28.474 [2024-11-27 15:06:53.786718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.474 [2024-11-27 15:06:53.786769] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:28.474 [2024-11-27 15:06:53.786783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:28.474 [2024-11-27 15:06:53.786835] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:28.475 [2024-11-27 15:06:53.786850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:28.475 #45 NEW cov: 12557 ft: 15268 corp: 34/2933b lim: 100 exec/s: 45 rss: 75Mb L: 86/100 MS: 1 EraseBytes- 00:07:28.734 [2024-11-27 15:06:53.826682] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:28.734 [2024-11-27 15:06:53.826708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.734 [2024-11-27 15:06:53.826777] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:28.734 [2024-11-27 15:06:53.826790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.734 [2024-11-27 15:06:53.826842] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:28.734 [2024-11-27 15:06:53.826857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:28.734 [2024-11-27 15:06:53.826910] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:28.734 [2024-11-27 15:06:53.826925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:28.734 #46 NEW cov: 12557 ft: 15289 corp: 35/3030b lim: 100 exec/s: 23 rss: 75Mb L: 97/100 MS: 1 InsertByte- 00:07:28.734 #46 DONE cov: 12557 ft: 15289 corp: 35/3030b lim: 100 exec/s: 23 rss: 75Mb 00:07:28.734 Done 46 runs in 2 second(s) 00:07:28.734 15:06:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_18.conf /var/tmp/suppress_nvmf_fuzz 00:07:28.734 15:06:53 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:28.734 15:06:53 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:28.734 15:06:53 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 19 1 0x1 00:07:28.734 15:06:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=19 00:07:28.734 15:06:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:28.734 15:06:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:28.734 15:06:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:07:28.734 15:06:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_19.conf 00:07:28.734 15:06:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:28.734 15:06:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:28.734 15:06:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 19 00:07:28.734 15:06:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4419 00:07:28.734 15:06:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:07:28.734 15:06:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' 00:07:28.734 15:06:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4419"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:28.734 15:06:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:28.734 15:06:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 
00:07:28.734 15:06:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' -c /tmp/fuzz_json_19.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 -Z 19 00:07:28.734 [2024-11-27 15:06:53.976952] Starting SPDK v25.01-pre git sha1 2e10c84c8 / DPDK 24.03.0 initialization... 00:07:28.734 [2024-11-27 15:06:53.977009] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2373613 ] 00:07:28.994 [2024-11-27 15:06:54.163133] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.994 [2024-11-27 15:06:54.196581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.994 [2024-11-27 15:06:54.255735] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:28.994 [2024-11-27 15:06:54.272092] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4419 *** 00:07:28.994 INFO: Running with entropic power schedule (0xFF, 100). 00:07:28.994 INFO: Seed: 37873726 00:07:28.994 INFO: Loaded 1 modules (389699 inline 8-bit counters): 389699 [0x2c6ea8c, 0x2ccdccf), 00:07:28.994 INFO: Loaded 1 PC tables (389699 PCs): 389699 [0x2ccdcd0,0x32c0100), 00:07:28.994 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:07:28.994 INFO: A corpus is not provided, starting from an empty corpus 00:07:28.994 #2 INITED exec/s: 0 rss: 65Mb 00:07:28.994 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:28.994 This may also happen if the target rejected all inputs we tried so far 00:07:28.994 [2024-11-27 15:06:54.327318] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:168443905 len:1 00:07:28.994 [2024-11-27 15:06:54.327352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.511 NEW_FUNC[1/716]: 0x45c338 in fuzz_nvm_write_uncorrectable_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:582 00:07:29.511 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:29.511 #5 NEW cov: 12300 ft: 12285 corp: 2/12b lim: 50 exec/s: 0 rss: 73Mb L: 11/11 MS: 3 CrossOver-InsertByte-CMP- DE: "\001\000\000\000\000\000\000\000"- 00:07:29.511 [2024-11-27 15:06:54.638250] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:3255307777123958785 len:11566 00:07:29.511 [2024-11-27 15:06:54.638284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.511 [2024-11-27 15:06:54.638340] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:3255307777713450285 len:1 00:07:29.511 [2024-11-27 15:06:54.638360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:29.511 #11 NEW cov: 12413 ft: 13310 corp: 3/37b lim: 50 exec/s: 0 rss: 73Mb L: 25/25 MS: 1 InsertRepeatedBytes- 00:07:29.511 [2024-11-27 15:06:54.698238] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:168443905 len:1 00:07:29.511 [2024-11-27 15:06:54.698267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.511 #12 NEW cov: 12419 ft: 13501 corp: 4/52b lim: 50 exec/s: 0 rss: 73Mb L: 15/25 MS: 1 CMP- DE: "\377\377\000\000"- 00:07:29.511 [2024-11-27 15:06:54.738415] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:3255307777123958785 len:11566 00:07:29.511 [2024-11-27 15:06:54.738445] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.511 [2024-11-27 15:06:54.738481] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:3255307777713450285 len:1 00:07:29.511 [2024-11-27 15:06:54.738497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:29.511 #13 NEW cov: 12504 ft: 13848 corp: 5/78b lim: 50 exec/s: 0 rss: 74Mb L: 26/26 MS: 1 InsertByte- 00:07:29.511 [2024-11-27 15:06:54.798442] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:168443905 len:1 00:07:29.511 [2024-11-27 15:06:54.798470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.511 #14 NEW cov: 12504 ft: 14044 corp: 6/97b lim: 50 exec/s: 0 rss: 74Mb L: 19/26 MS: 1 InsertRepeatedBytes- 00:07:29.511 [2024-11-27 15:06:54.838588] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: 
WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:168443905 len:1 00:07:29.511 [2024-11-27 15:06:54.838620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.769 #15 NEW cov: 12504 ft: 14105 corp: 7/116b lim: 50 exec/s: 0 rss: 74Mb L: 19/26 MS: 1 ChangeBit- 00:07:29.769 [2024-11-27 15:06:54.898805] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:3255307583850430465 len:1 00:07:29.769 [2024-11-27 15:06:54.898833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.769 #16 NEW cov: 12504 ft: 14157 corp: 8/130b lim: 50 exec/s: 0 rss: 74Mb L: 14/26 MS: 1 EraseBytes- 00:07:29.769 [2024-11-27 15:06:54.938843] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:168443905 len:1 00:07:29.769 [2024-11-27 15:06:54.938872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.769 #17 NEW cov: 12504 ft: 14231 corp: 9/149b lim: 50 exec/s: 0 rss: 74Mb L: 19/26 MS: 1 ChangeBit- 00:07:29.769 [2024-11-27 15:06:54.978993] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:34085028904961 len:1 00:07:29.769 [2024-11-27 15:06:54.979021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.769 #18 NEW cov: 12504 ft: 14299 corp: 10/164b lim: 50 exec/s: 0 rss: 74Mb L: 15/26 MS: 1 CMP- DE: "\000\037"- 00:07:29.769 [2024-11-27 15:06:55.039281] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:3255307777123958785 len:11566 00:07:29.769 [2024-11-27 15:06:55.039308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.769 [2024-11-27 15:06:55.039364] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:3255307777713450285 len:1 00:07:29.769 [2024-11-27 15:06:55.039381] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:29.769 #19 NEW cov: 12504 ft: 14403 corp: 11/191b lim: 50 exec/s: 0 rss: 74Mb L: 27/27 MS: 1 InsertByte- 00:07:29.769 [2024-11-27 15:06:55.099340] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:3255307583850430465 len:1 00:07:29.769 [2024-11-27 15:06:55.099367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:30.028 #21 NEW cov: 12504 ft: 14478 corp: 12/208b lim: 50 exec/s: 0 rss: 74Mb L: 17/27 MS: 2 PersAutoDict-CrossOver- DE: "\000\037"- 00:07:30.028 [2024-11-27 15:06:55.139548] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:4463411201 len:1 00:07:30.028 [2024-11-27 15:06:55.139575] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:30.028 [2024-11-27 15:06:55.139639] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:17220 00:07:30.028 [2024-11-27 15:06:55.139657] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:30.028 #22 NEW cov: 12504 ft: 14507 corp: 13/235b lim: 50 exec/s: 0 rss: 74Mb L: 27/27 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\000"- 00:07:30.028 [2024-11-27 15:06:55.199717] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:3246300577869217793 len:11566 00:07:30.028 [2024-11-27 15:06:55.199744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:30.028 [2024-11-27 15:06:55.199799] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:3255307777713450285 len:1 00:07:30.028 [2024-11-27 15:06:55.199816] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:30.028 NEW_FUNC[1/1]: 0x1c47d08 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:30.028 #23 NEW cov: 12527 ft: 14595 corp: 14/261b lim: 50 exec/s: 0 rss: 74Mb L: 26/27 MS: 1 ChangeBit- 00:07:30.028 [2024-11-27 15:06:55.240116] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:3255307777123958785 len:11566 00:07:30.028 [2024-11-27 15:06:55.240143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:30.028 [2024-11-27 15:06:55.240209] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:84773640042318400 len:1 00:07:30.028 [2024-11-27 15:06:55.240225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:30.028 [2024-11-27 15:06:55.240277] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:3255307776955514880 len:11566 00:07:30.028 [2024-11-27 15:06:55.240293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:30.028 [2024-11-27 15:06:55.240346] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:2949120 len:41985 00:07:30.028 [2024-11-27 15:06:55.240363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:30.028 #24 NEW cov: 12527 ft: 14938 corp: 15/301b lim: 50 exec/s: 0 rss: 74Mb L: 40/40 MS: 1 CrossOver- 00:07:30.028 [2024-11-27 15:06:55.279809] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1095385088000 len:65281 00:07:30.028 [2024-11-27 15:06:55.279837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:30.028 #25 NEW cov: 12527 ft: 14973 corp: 16/312b lim: 50 exec/s: 0 rss: 74Mb L: 11/40 MS: 1 EraseBytes- 00:07:30.028 [2024-11-27 15:06:55.319917] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:168443905 len:256 00:07:30.028 [2024-11-27 15:06:55.319947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:30.028 #26 NEW cov: 12527 ft: 14995 corp: 17/327b lim: 
50 exec/s: 26 rss: 74Mb L: 15/40 MS: 1 CopyPart- 00:07:30.287 [2024-11-27 15:06:55.380161] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:281475145154561 len:1 00:07:30.287 [2024-11-27 15:06:55.380189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:30.287 #27 NEW cov: 12527 ft: 15012 corp: 18/342b lim: 50 exec/s: 27 rss: 74Mb L: 15/40 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\000"- 00:07:30.287 [2024-11-27 15:06:55.420252] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:168443905 len:1 00:07:30.287 [2024-11-27 15:06:55.420281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:30.287 #28 NEW cov: 12527 ft: 15022 corp: 19/361b lim: 50 exec/s: 28 rss: 74Mb L: 19/40 MS: 1 ChangeBit- 00:07:30.287 [2024-11-27 15:06:55.460373] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:168443905 len:1 00:07:30.287 [2024-11-27 15:06:55.460402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:30.287 #29 NEW cov: 12527 ft: 15086 corp: 20/380b lim: 50 exec/s: 29 rss: 74Mb L: 19/40 MS: 1 ChangeBinInt- 00:07:30.287 [2024-11-27 15:06:55.500605] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:3255307777123958785 len:11566 00:07:30.287 [2024-11-27 15:06:55.500634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:30.287 [2024-11-27 15:06:55.500685] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:3255307777713450285 len:5 00:07:30.287 [2024-11-27 15:06:55.500701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:30.287 #30 NEW cov: 12527 ft: 15093 corp: 21/407b lim: 50 exec/s: 30 rss: 74Mb L: 27/40 MS: 1 ChangeBit- 00:07:30.287 [2024-11-27 15:06:55.560680] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:3314649325913112576 len:65536 00:07:30.287 [2024-11-27 15:06:55.560708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:30.287 #31 NEW cov: 12527 ft: 15105 corp: 22/419b lim: 50 exec/s: 31 rss: 75Mb L: 12/40 MS: 1 InsertByte- 00:07:30.287 [2024-11-27 15:06:55.620837] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:168558592 len:256 00:07:30.287 [2024-11-27 15:06:55.620865] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:30.546 #32 NEW cov: 12527 ft: 15107 corp: 23/431b lim: 50 exec/s: 32 rss: 75Mb L: 12/40 MS: 1 ChangeBinInt- 00:07:30.546 [2024-11-27 15:06:55.681023] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:16777216 len:1 00:07:30.546 [2024-11-27 15:06:55.681050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:30.546 #33 NEW cov: 12527 ft: 15120 corp: 24/450b 
lim: 50 exec/s: 33 rss: 75Mb L: 19/40 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\000"- 00:07:30.546 [2024-11-27 15:06:55.741385] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:168443904 len:1 00:07:30.546 [2024-11-27 15:06:55.741412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:30.546 [2024-11-27 15:06:55.741459] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:3255307776955514881 len:11566 00:07:30.546 [2024-11-27 15:06:55.741479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:30.546 [2024-11-27 15:06:55.741535] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:3255307777713450285 len:1 00:07:30.546 [2024-11-27 15:06:55.741552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:30.546 #34 NEW cov: 12527 ft: 15368 corp: 25/486b lim: 50 exec/s: 34 rss: 75Mb L: 36/40 MS: 1 InsertRepeatedBytes- 00:07:30.546 [2024-11-27 15:06:55.781310] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:168558592 len:256 00:07:30.546 [2024-11-27 15:06:55.781339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:30.546 #35 NEW cov: 12527 ft: 15408 corp: 26/498b lim: 50 exec/s: 35 rss: 75Mb L: 12/40 MS: 1 ShuffleBytes- 00:07:30.546 [2024-11-27 15:06:55.841454] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:281475145154561 len:1 00:07:30.546 [2024-11-27 15:06:55.841483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:30.546 #36 NEW cov: 12527 ft: 15409 corp: 27/514b lim: 50 exec/s: 36 rss: 75Mb L: 16/40 MS: 1 InsertByte- 00:07:30.805 [2024-11-27 15:06:55.901662] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:34085028904961 len:1 00:07:30.805 [2024-11-27 15:06:55.901691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:30.805 #37 NEW cov: 12527 ft: 15415 corp: 28/529b lim: 50 exec/s: 37 rss: 75Mb L: 15/40 MS: 1 ChangeBinInt- 00:07:30.805 [2024-11-27 15:06:55.941733] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:78198638002177 len:1 00:07:30.805 [2024-11-27 15:06:55.941761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:30.805 #38 NEW cov: 12527 ft: 15425 corp: 29/545b lim: 50 exec/s: 38 rss: 75Mb L: 16/40 MS: 1 InsertByte- 00:07:30.805 [2024-11-27 15:06:56.002262] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:12716045417316352 len:11566 00:07:30.805 [2024-11-27 15:06:56.002290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:30.805 [2024-11-27 15:06:56.002343] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 
lba:4612017165458803210 len:11521 00:07:30.805 [2024-11-27 15:06:56.002360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:30.805 [2024-11-27 15:06:56.002414] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:12716045248757760 len:11566 00:07:30.806 [2024-11-27 15:06:56.002430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:30.806 [2024-11-27 15:06:56.002486] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:754986240 len:1 00:07:30.806 [2024-11-27 15:06:56.002502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:30.806 #39 NEW cov: 12527 ft: 15442 corp: 30/591b lim: 50 exec/s: 39 rss: 75Mb L: 46/46 MS: 1 CrossOver- 00:07:30.806 [2024-11-27 15:06:56.042012] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:168558592 len:256 00:07:30.806 [2024-11-27 15:06:56.042039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:30.806 #40 NEW cov: 12527 ft: 15447 corp: 31/603b lim: 50 exec/s: 40 rss: 75Mb L: 12/46 MS: 1 ChangeByte- 00:07:30.806 [2024-11-27 15:06:56.082376] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:17578661995800687603 len:62452 00:07:30.806 [2024-11-27 15:06:56.082405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:30.806 [2024-11-27 15:06:56.082444] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:17578661999652631539 len:62452 00:07:30.806 [2024-11-27 15:06:56.082461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:30.806 [2024-11-27 15:06:56.082515] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:17578660998925251571 len:16386 00:07:30.806 [2024-11-27 15:06:56.082531] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:30.806 #45 NEW cov: 12527 ft: 15479 corp: 32/633b lim: 50 exec/s: 45 rss: 75Mb L: 30/46 MS: 5 CrossOver-ShuffleBytes-ChangeBit-InsertByte-InsertRepeatedBytes- 00:07:30.806 [2024-11-27 15:06:56.122519] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:84773639455115840 len:11566 00:07:30.806 [2024-11-27 15:06:56.122545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:30.806 [2024-11-27 15:06:56.122596] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:738591439158324525 len:32 00:07:30.806 [2024-11-27 15:06:56.122616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:30.806 [2024-11-27 15:06:56.122669] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18387633422362214654 len:11566 00:07:30.806 
[2024-11-27 15:06:56.122683] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.066 #46 NEW cov: 12527 ft: 15484 corp: 33/669b lim: 50 exec/s: 46 rss: 75Mb L: 36/46 MS: 1 CrossOver- 00:07:31.066 [2024-11-27 15:06:56.162382] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:34085028904961 len:1 00:07:31.066 [2024-11-27 15:06:56.162410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.066 #47 NEW cov: 12527 ft: 15498 corp: 34/684b lim: 50 exec/s: 47 rss: 75Mb L: 15/46 MS: 1 ShuffleBytes- 00:07:31.066 [2024-11-27 15:06:56.202841] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18374695327472697599 len:3085 00:07:31.066 [2024-11-27 15:06:56.202869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.066 [2024-11-27 15:06:56.202921] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:868082074056920076 len:3085 00:07:31.066 [2024-11-27 15:06:56.202937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.066 [2024-11-27 15:06:56.202989] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:868082074056920076 len:3085 00:07:31.066 [2024-11-27 15:06:56.203004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.066 [2024-11-27 15:06:56.203060] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:868082074056920076 len:3085 00:07:31.066 [2024-11-27 15:06:56.203075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:31.066 #50 NEW cov: 12527 ft: 15520 corp: 35/732b lim: 50 exec/s: 50 rss: 75Mb L: 48/48 MS: 3 EraseBytes-ChangeBit-InsertRepeatedBytes- 00:07:31.066 [2024-11-27 15:06:56.262886] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:949464767910248749 len:11566 00:07:31.066 [2024-11-27 15:06:56.262913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.066 [2024-11-27 15:06:56.262978] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:3255307584439921965 len:1 00:07:31.066 [2024-11-27 15:06:56.262995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.066 [2024-11-27 15:06:56.263050] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:16385 len:1 00:07:31.066 [2024-11-27 15:06:56.263064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.066 #56 NEW cov: 12527 ft: 15531 corp: 36/767b lim: 50 exec/s: 56 rss: 75Mb L: 35/48 MS: 1 CrossOver- 00:07:31.066 [2024-11-27 15:06:56.302723] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 
lba:281479440121857 len:2 00:07:31.066 [2024-11-27 15:06:56.302751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.066 #57 NEW cov: 12527 ft: 15543 corp: 37/782b lim: 50 exec/s: 28 rss: 75Mb L: 15/48 MS: 1 CMP- DE: "\001\000\001\312"- 00:07:31.066 #57 DONE cov: 12527 ft: 15543 corp: 37/782b lim: 50 exec/s: 28 rss: 75Mb 00:07:31.066 ###### Recommended dictionary. ###### 00:07:31.066 "\001\000\000\000\000\000\000\000" # Uses: 3 00:07:31.066 "\377\377\000\000" # Uses: 0 00:07:31.066 "\000\037" # Uses: 1 00:07:31.066 "\001\000\001\312" # Uses: 0 00:07:31.066 ###### End of recommended dictionary. ###### 00:07:31.066 Done 57 runs in 2 second(s) 00:07:31.326 15:06:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_19.conf /var/tmp/suppress_nvmf_fuzz 00:07:31.326 15:06:56 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:31.326 15:06:56 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:31.326 15:06:56 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 20 1 0x1 00:07:31.326 15:06:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=20 00:07:31.326 15:06:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:31.326 15:06:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:31.326 15:06:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:07:31.326 15:06:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_20.conf 00:07:31.326 15:06:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:31.326 15:06:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:31.326 15:06:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 20 00:07:31.326 15:06:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4420 00:07:31.326 15:06:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:07:31.326 15:06:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' 00:07:31.326 15:06:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4420"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:31.326 15:06:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:31.326 15:06:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:31.326 15:06:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' -c /tmp/fuzz_json_20.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 -Z 20 00:07:31.326 [2024-11-27 15:06:56.473262] Starting SPDK v25.01-pre git sha1 2e10c84c8 / DPDK 24.03.0 initialization... 
00:07:31.326 [2024-11-27 15:06:56.473342] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2374118 ] 00:07:31.326 [2024-11-27 15:06:56.656829] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.585 [2024-11-27 15:06:56.691396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.585 [2024-11-27 15:06:56.750363] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:31.585 [2024-11-27 15:06:56.766718] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:07:31.585 INFO: Running with entropic power schedule (0xFF, 100). 00:07:31.585 INFO: Seed: 2531883896 00:07:31.585 INFO: Loaded 1 modules (389699 inline 8-bit counters): 389699 [0x2c6ea8c, 0x2ccdccf), 00:07:31.585 INFO: Loaded 1 PC tables (389699 PCs): 389699 [0x2ccdcd0,0x32c0100), 00:07:31.585 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:07:31.585 INFO: A corpus is not provided, starting from an empty corpus 00:07:31.585 #2 INITED exec/s: 0 rss: 65Mb 00:07:31.585 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:31.585 This may also happen if the target rejected all inputs we tried so far 00:07:31.585 [2024-11-27 15:06:56.812339] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:31.585 [2024-11-27 15:06:56.812370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.585 [2024-11-27 15:06:56.812409] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:31.585 [2024-11-27 15:06:56.812425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.585 [2024-11-27 15:06:56.812474] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:31.585 [2024-11-27 15:06:56.812490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.585 [2024-11-27 15:06:56.812541] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:31.585 [2024-11-27 15:06:56.812556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:31.844 NEW_FUNC[1/718]: 0x45def8 in fuzz_nvm_reservation_acquire_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:597 00:07:31.844 NEW_FUNC[2/718]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:31.844 #3 NEW cov: 12358 ft: 12357 corp: 2/83b lim: 90 exec/s: 0 rss: 73Mb L: 82/82 MS: 1 InsertRepeatedBytes- 00:07:31.844 [2024-11-27 15:06:57.143737] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:31.844 [2024-11-27 15:06:57.143833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.844 [2024-11-27 
15:06:57.143947] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:31.844 [2024-11-27 15:06:57.143977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.844 [2024-11-27 15:06:57.144054] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:31.844 [2024-11-27 15:06:57.144082] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.844 [2024-11-27 15:06:57.144158] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:31.844 [2024-11-27 15:06:57.144186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:32.103 #4 NEW cov: 12471 ft: 12997 corp: 3/171b lim: 90 exec/s: 0 rss: 73Mb L: 88/88 MS: 1 InsertRepeatedBytes- 00:07:32.103 [2024-11-27 15:06:57.213299] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:32.103 [2024-11-27 15:06:57.213328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:32.103 [2024-11-27 15:06:57.213376] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:32.103 [2024-11-27 15:06:57.213391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:32.103 [2024-11-27 15:06:57.213442] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:32.103 [2024-11-27 15:06:57.213458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:32.103 [2024-11-27 15:06:57.213512] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:32.103 [2024-11-27 15:06:57.213527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:32.103 #10 NEW cov: 12477 ft: 13332 corp: 4/253b lim: 90 exec/s: 0 rss: 73Mb L: 82/88 MS: 1 CopyPart- 00:07:32.103 [2024-11-27 15:06:57.253398] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:32.103 [2024-11-27 15:06:57.253425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:32.103 [2024-11-27 15:06:57.253472] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:32.103 [2024-11-27 15:06:57.253488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:32.103 [2024-11-27 15:06:57.253538] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:32.103 [2024-11-27 15:06:57.253553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:32.103 [2024-11-27 15:06:57.253607] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 
00:07:32.103 [2024-11-27 15:06:57.253622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:32.103 #16 NEW cov: 12562 ft: 13640 corp: 5/341b lim: 90 exec/s: 0 rss: 73Mb L: 88/88 MS: 1 ChangeByte- 00:07:32.103 [2024-11-27 15:06:57.313249] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:32.103 [2024-11-27 15:06:57.313276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:32.103 [2024-11-27 15:06:57.313315] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:32.103 [2024-11-27 15:06:57.313330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:32.103 #17 NEW cov: 12562 ft: 14152 corp: 6/394b lim: 90 exec/s: 0 rss: 73Mb L: 53/88 MS: 1 EraseBytes- 00:07:32.103 [2024-11-27 15:06:57.373728] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:32.103 [2024-11-27 15:06:57.373756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:32.103 [2024-11-27 15:06:57.373805] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:32.103 [2024-11-27 15:06:57.373821] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:32.103 [2024-11-27 15:06:57.373873] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:32.103 [2024-11-27 15:06:57.373891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:32.103 [2024-11-27 15:06:57.373943] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:32.103 [2024-11-27 15:06:57.373958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:32.103 #23 NEW cov: 12562 ft: 14247 corp: 7/476b lim: 90 exec/s: 0 rss: 73Mb L: 82/88 MS: 1 ShuffleBytes- 00:07:32.103 [2024-11-27 15:06:57.433914] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:32.103 [2024-11-27 15:06:57.433942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:32.103 [2024-11-27 15:06:57.433991] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:32.103 [2024-11-27 15:06:57.434006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:32.103 [2024-11-27 15:06:57.434058] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:32.103 [2024-11-27 15:06:57.434074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:32.103 [2024-11-27 15:06:57.434124] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:32.103 
[2024-11-27 15:06:57.434139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:32.362 #24 NEW cov: 12562 ft: 14355 corp: 8/562b lim: 90 exec/s: 0 rss: 74Mb L: 86/88 MS: 1 InsertRepeatedBytes- 00:07:32.362 [2024-11-27 15:06:57.493769] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:32.362 [2024-11-27 15:06:57.493798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:32.362 [2024-11-27 15:06:57.493848] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:32.362 [2024-11-27 15:06:57.493864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:32.362 #25 NEW cov: 12562 ft: 14426 corp: 9/615b lim: 90 exec/s: 0 rss: 74Mb L: 53/88 MS: 1 ChangeByte- 00:07:32.362 [2024-11-27 15:06:57.554056] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:32.362 [2024-11-27 15:06:57.554083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:32.362 [2024-11-27 15:06:57.554143] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:32.362 [2024-11-27 15:06:57.554159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:32.362 [2024-11-27 15:06:57.554211] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:32.362 [2024-11-27 15:06:57.554226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:32.362 #26 NEW cov: 12562 ft: 14738 corp: 10/676b lim: 90 exec/s: 0 rss: 74Mb L: 61/88 MS: 1 CMP- DE: "?\000\000\000\000\000\000\000"- 00:07:32.362 [2024-11-27 15:06:57.594149] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:32.362 [2024-11-27 15:06:57.594176] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:32.362 [2024-11-27 15:06:57.594223] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:32.362 [2024-11-27 15:06:57.594238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:32.362 [2024-11-27 15:06:57.594292] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:32.362 [2024-11-27 15:06:57.594307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:32.362 #27 NEW cov: 12562 ft: 14806 corp: 11/730b lim: 90 exec/s: 0 rss: 74Mb L: 54/88 MS: 1 InsertByte- 00:07:32.362 [2024-11-27 15:06:57.634119] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:32.362 [2024-11-27 15:06:57.634146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:32.362 
[2024-11-27 15:06:57.634186] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:32.362 [2024-11-27 15:06:57.634202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:32.362 #28 NEW cov: 12562 ft: 14833 corp: 12/774b lim: 90 exec/s: 0 rss: 74Mb L: 44/88 MS: 1 EraseBytes- 00:07:32.362 [2024-11-27 15:06:57.674507] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:32.362 [2024-11-27 15:06:57.674534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:32.362 [2024-11-27 15:06:57.674606] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:32.362 [2024-11-27 15:06:57.674623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:32.362 [2024-11-27 15:06:57.674684] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:32.362 [2024-11-27 15:06:57.674699] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:32.362 [2024-11-27 15:06:57.674751] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:32.362 [2024-11-27 15:06:57.674766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:32.362 NEW_FUNC[1/1]: 0x1c47d08 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:32.362 #29 NEW cov: 12585 ft: 14854 corp: 13/862b lim: 90 exec/s: 0 rss: 74Mb L: 88/88 MS: 1 ShuffleBytes- 00:07:32.621 [2024-11-27 15:06:57.714307] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:32.621 [2024-11-27 15:06:57.714334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:32.621 [2024-11-27 15:06:57.714380] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:32.621 [2024-11-27 15:06:57.714396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:32.621 #30 NEW cov: 12585 ft: 14872 corp: 14/911b lim: 90 exec/s: 0 rss: 74Mb L: 49/88 MS: 1 EraseBytes- 00:07:32.621 [2024-11-27 15:06:57.754772] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:32.621 [2024-11-27 15:06:57.754799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:32.621 [2024-11-27 15:06:57.754864] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:32.621 [2024-11-27 15:06:57.754880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:32.621 [2024-11-27 15:06:57.754932] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:32.621 [2024-11-27 15:06:57.754948] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:32.621 [2024-11-27 15:06:57.755002] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:32.621 [2024-11-27 15:06:57.755019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:32.621 #31 NEW cov: 12585 ft: 14927 corp: 15/999b lim: 90 exec/s: 0 rss: 74Mb L: 88/88 MS: 1 ChangeBit- 00:07:32.621 [2024-11-27 15:06:57.794438] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:32.621 [2024-11-27 15:06:57.794465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:32.621 #32 NEW cov: 12585 ft: 15704 corp: 16/1029b lim: 90 exec/s: 32 rss: 74Mb L: 30/88 MS: 1 CrossOver- 00:07:32.621 [2024-11-27 15:06:57.855074] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:32.621 [2024-11-27 15:06:57.855101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:32.621 [2024-11-27 15:06:57.855150] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:32.621 [2024-11-27 15:06:57.855166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:32.621 [2024-11-27 15:06:57.855219] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:32.621 [2024-11-27 15:06:57.855235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:32.621 [2024-11-27 15:06:57.855287] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:32.621 [2024-11-27 15:06:57.855302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:32.621 #33 NEW cov: 12585 ft: 15733 corp: 17/1108b lim: 90 exec/s: 33 rss: 74Mb L: 79/88 MS: 1 EraseBytes- 00:07:32.621 [2024-11-27 15:06:57.895010] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:32.621 [2024-11-27 15:06:57.895037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:32.622 [2024-11-27 15:06:57.895098] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:32.622 [2024-11-27 15:06:57.895114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:32.622 [2024-11-27 15:06:57.895168] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:32.622 [2024-11-27 15:06:57.895181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:32.622 #34 NEW cov: 12585 ft: 15752 corp: 18/1170b lim: 90 exec/s: 34 rss: 74Mb L: 62/88 MS: 1 InsertRepeatedBytes- 00:07:32.622 [2024-11-27 15:06:57.935122] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:32.622 [2024-11-27 15:06:57.935149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:32.622 [2024-11-27 15:06:57.935201] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:32.622 [2024-11-27 15:06:57.935217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:32.622 [2024-11-27 15:06:57.935273] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:32.622 [2024-11-27 15:06:57.935288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:32.881 #35 NEW cov: 12585 ft: 15764 corp: 19/1232b lim: 90 exec/s: 35 rss: 74Mb L: 62/88 MS: 1 CopyPart- 00:07:32.881 [2024-11-27 15:06:57.995447] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:32.881 [2024-11-27 15:06:57.995474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:32.881 [2024-11-27 15:06:57.995541] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:32.881 [2024-11-27 15:06:57.995557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:32.881 [2024-11-27 15:06:57.995613] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:32.881 [2024-11-27 15:06:57.995629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:32.881 [2024-11-27 15:06:57.995692] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:32.881 [2024-11-27 15:06:57.995706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:32.881 #36 NEW cov: 12585 ft: 15785 corp: 20/1320b lim: 90 exec/s: 36 rss: 74Mb L: 88/88 MS: 1 ChangeByte- 00:07:32.881 [2024-11-27 15:06:58.035531] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:32.881 [2024-11-27 15:06:58.035557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:32.881 [2024-11-27 15:06:58.035631] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:32.881 [2024-11-27 15:06:58.035647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:32.881 [2024-11-27 15:06:58.035708] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:32.881 [2024-11-27 15:06:58.035723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:32.881 [2024-11-27 15:06:58.035776] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:32.881 [2024-11-27 
15:06:58.035792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:32.881 #37 NEW cov: 12585 ft: 15842 corp: 21/1408b lim: 90 exec/s: 37 rss: 74Mb L: 88/88 MS: 1 ChangeByte- 00:07:32.881 [2024-11-27 15:06:58.075678] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:32.881 [2024-11-27 15:06:58.075704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:32.881 [2024-11-27 15:06:58.075774] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:32.881 [2024-11-27 15:06:58.075790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:32.881 [2024-11-27 15:06:58.075843] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:32.881 [2024-11-27 15:06:58.075859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:32.881 [2024-11-27 15:06:58.075914] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:32.881 [2024-11-27 15:06:58.075930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:32.881 #38 NEW cov: 12585 ft: 15849 corp: 22/1487b lim: 90 exec/s: 38 rss: 74Mb L: 79/88 MS: 1 PersAutoDict- DE: "?\000\000\000\000\000\000\000"- 00:07:32.881 [2024-11-27 15:06:58.135909] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:32.881 [2024-11-27 15:06:58.135938] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:32.881 [2024-11-27 15:06:58.135994] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:32.881 [2024-11-27 15:06:58.136010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:32.881 [2024-11-27 15:06:58.136063] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:32.881 [2024-11-27 15:06:58.136078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:32.881 [2024-11-27 15:06:58.136131] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:32.881 [2024-11-27 15:06:58.136146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:32.881 #44 NEW cov: 12585 ft: 15918 corp: 23/1572b lim: 90 exec/s: 44 rss: 74Mb L: 85/88 MS: 1 CrossOver- 00:07:32.881 [2024-11-27 15:06:58.175814] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:32.881 [2024-11-27 15:06:58.175840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:32.881 [2024-11-27 15:06:58.175896] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 
nsid:0 00:07:32.881 [2024-11-27 15:06:58.175910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:32.881 [2024-11-27 15:06:58.175962] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:32.881 [2024-11-27 15:06:58.175976] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:32.881 #45 NEW cov: 12585 ft: 15939 corp: 24/1639b lim: 90 exec/s: 45 rss: 74Mb L: 67/88 MS: 1 EraseBytes- 00:07:32.881 [2024-11-27 15:06:58.215811] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:32.881 [2024-11-27 15:06:58.215838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:32.881 [2024-11-27 15:06:58.215877] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:32.881 [2024-11-27 15:06:58.215893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:33.141 #47 NEW cov: 12585 ft: 15956 corp: 25/1687b lim: 90 exec/s: 47 rss: 74Mb L: 48/88 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:07:33.141 [2024-11-27 15:06:58.256193] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:33.141 [2024-11-27 15:06:58.256221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.141 [2024-11-27 15:06:58.256267] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:33.141 [2024-11-27 15:06:58.256282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:33.141 [2024-11-27 15:06:58.256333] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:33.141 [2024-11-27 15:06:58.256349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:33.141 [2024-11-27 15:06:58.256410] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:33.141 [2024-11-27 15:06:58.256424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:33.141 #48 NEW cov: 12585 ft: 15986 corp: 26/1767b lim: 90 exec/s: 48 rss: 74Mb L: 80/88 MS: 1 EraseBytes- 00:07:33.141 [2024-11-27 15:06:58.316092] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:33.141 [2024-11-27 15:06:58.316120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.141 [2024-11-27 15:06:58.316157] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:33.141 [2024-11-27 15:06:58.316172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:33.141 [2024-11-27 15:06:58.376235] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE 
(11) sqid:1 cid:0 nsid:0 00:07:33.141 [2024-11-27 15:06:58.376262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.141 [2024-11-27 15:06:58.376301] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:33.141 [2024-11-27 15:06:58.376317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:33.141 #50 NEW cov: 12585 ft: 16058 corp: 27/1816b lim: 90 exec/s: 50 rss: 74Mb L: 49/88 MS: 2 ChangeBit-PersAutoDict- DE: "?\000\000\000\000\000\000\000"- 00:07:33.142 [2024-11-27 15:06:58.416665] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:33.142 [2024-11-27 15:06:58.416692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.142 [2024-11-27 15:06:58.416741] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:33.142 [2024-11-27 15:06:58.416756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:33.142 [2024-11-27 15:06:58.416809] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:33.142 [2024-11-27 15:06:58.416824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:33.142 [2024-11-27 15:06:58.416876] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:33.142 [2024-11-27 15:06:58.416891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:33.142 #51 NEW cov: 12585 ft: 16070 corp: 28/1894b lim: 90 exec/s: 51 rss: 74Mb L: 78/88 MS: 1 EraseBytes- 00:07:33.142 [2024-11-27 15:06:58.456454] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:33.142 [2024-11-27 15:06:58.456480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.142 [2024-11-27 15:06:58.456518] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:33.142 [2024-11-27 15:06:58.456533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:33.401 #52 NEW cov: 12585 ft: 16084 corp: 29/1947b lim: 90 exec/s: 52 rss: 75Mb L: 53/88 MS: 1 ChangeBinInt- 00:07:33.401 [2024-11-27 15:06:58.516942] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:33.401 [2024-11-27 15:06:58.516970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.401 [2024-11-27 15:06:58.517019] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:33.401 [2024-11-27 15:06:58.517035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:33.401 [2024-11-27 15:06:58.517087] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:33.401 [2024-11-27 15:06:58.517102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:33.401 [2024-11-27 15:06:58.517156] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:33.401 [2024-11-27 15:06:58.517172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:33.401 #53 NEW cov: 12585 ft: 16090 corp: 30/2032b lim: 90 exec/s: 53 rss: 75Mb L: 85/88 MS: 1 ShuffleBytes- 00:07:33.401 [2024-11-27 15:06:58.576786] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:33.401 [2024-11-27 15:06:58.576814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.401 [2024-11-27 15:06:58.576878] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:33.401 [2024-11-27 15:06:58.576895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:33.401 #54 NEW cov: 12585 ft: 16128 corp: 31/2077b lim: 90 exec/s: 54 rss: 75Mb L: 45/88 MS: 1 EraseBytes- 00:07:33.401 [2024-11-27 15:06:58.637296] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:33.401 [2024-11-27 15:06:58.637323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.401 [2024-11-27 15:06:58.637375] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:33.401 [2024-11-27 15:06:58.637390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:33.401 [2024-11-27 15:06:58.637441] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:33.401 [2024-11-27 15:06:58.637456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:33.401 [2024-11-27 15:06:58.637510] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:33.401 [2024-11-27 15:06:58.637525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:33.401 #55 NEW cov: 12585 ft: 16131 corp: 32/2161b lim: 90 exec/s: 55 rss: 75Mb L: 84/88 MS: 1 InsertRepeatedBytes- 00:07:33.401 [2024-11-27 15:06:58.697431] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:33.401 [2024-11-27 15:06:58.697458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.401 [2024-11-27 15:06:58.697510] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:33.401 [2024-11-27 15:06:58.697526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:33.401 [2024-11-27 15:06:58.697579] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:33.401 [2024-11-27 15:06:58.697594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:33.401 [2024-11-27 15:06:58.697653] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:33.401 [2024-11-27 15:06:58.697667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:33.401 #56 NEW cov: 12585 ft: 16171 corp: 33/2242b lim: 90 exec/s: 56 rss: 75Mb L: 81/88 MS: 1 CrossOver- 00:07:33.661 [2024-11-27 15:06:58.757413] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:33.661 [2024-11-27 15:06:58.757440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.661 [2024-11-27 15:06:58.757481] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:33.661 [2024-11-27 15:06:58.757497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:33.661 [2024-11-27 15:06:58.757550] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:33.661 [2024-11-27 15:06:58.757566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:33.661 #57 NEW cov: 12585 ft: 16206 corp: 34/2304b lim: 90 exec/s: 57 rss: 75Mb L: 62/88 MS: 1 ChangeByte- 00:07:33.661 [2024-11-27 15:06:58.797754] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:33.661 [2024-11-27 15:06:58.797783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.661 [2024-11-27 15:06:58.797834] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:33.661 [2024-11-27 15:06:58.797849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:33.661 [2024-11-27 15:06:58.797899] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:33.661 [2024-11-27 15:06:58.797913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:33.661 [2024-11-27 15:06:58.797964] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:33.661 [2024-11-27 15:06:58.797979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:33.661 #58 NEW cov: 12585 ft: 16223 corp: 35/2392b lim: 90 exec/s: 29 rss: 75Mb L: 88/88 MS: 1 InsertRepeatedBytes- 00:07:33.661 #58 DONE cov: 12585 ft: 16223 corp: 35/2392b lim: 90 exec/s: 29 rss: 75Mb 00:07:33.661 ###### Recommended dictionary. ###### 00:07:33.661 "?\000\000\000\000\000\000\000" # Uses: 2 00:07:33.661 ###### End of recommended dictionary. 
###### 00:07:33.661 Done 58 runs in 2 second(s) 00:07:33.661 15:06:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_20.conf /var/tmp/suppress_nvmf_fuzz 00:07:33.661 15:06:58 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:33.661 15:06:58 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:33.661 15:06:58 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 21 1 0x1 00:07:33.661 15:06:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=21 00:07:33.661 15:06:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:33.661 15:06:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:33.661 15:06:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:07:33.661 15:06:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_21.conf 00:07:33.661 15:06:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:33.661 15:06:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:33.661 15:06:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 21 00:07:33.661 15:06:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4421 00:07:33.661 15:06:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:07:33.661 15:06:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' 00:07:33.661 15:06:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4421"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:33.661 15:06:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:33.661 15:06:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:33.662 15:06:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' -c /tmp/fuzz_json_21.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 -Z 21 00:07:33.662 [2024-11-27 15:06:58.949306] Starting SPDK v25.01-pre git sha1 2e10c84c8 / DPDK 24.03.0 initialization... 
00:07:33.662 [2024-11-27 15:06:58.949362] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2374586 ] 00:07:33.921 [2024-11-27 15:06:59.136295] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.921 [2024-11-27 15:06:59.169208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.921 [2024-11-27 15:06:59.228061] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:33.921 [2024-11-27 15:06:59.244424] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4421 *** 00:07:33.921 INFO: Running with entropic power schedule (0xFF, 100). 00:07:33.921 INFO: Seed: 712903398 00:07:34.180 INFO: Loaded 1 modules (389699 inline 8-bit counters): 389699 [0x2c6ea8c, 0x2ccdccf), 00:07:34.180 INFO: Loaded 1 PC tables (389699 PCs): 389699 [0x2ccdcd0,0x32c0100), 00:07:34.180 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:07:34.180 INFO: A corpus is not provided, starting from an empty corpus 00:07:34.180 #2 INITED exec/s: 0 rss: 65Mb 00:07:34.180 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:34.180 This may also happen if the target rejected all inputs we tried so far 00:07:34.180 [2024-11-27 15:06:59.293234] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:34.180 [2024-11-27 15:06:59.293265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.180 [2024-11-27 15:06:59.293322] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:34.180 [2024-11-27 15:06:59.293337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.180 [2024-11-27 15:06:59.293390] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:34.180 [2024-11-27 15:06:59.293406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:34.439 NEW_FUNC[1/718]: 0x461128 in fuzz_nvm_reservation_release_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:623 00:07:34.439 NEW_FUNC[2/718]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:34.439 #23 NEW cov: 12317 ft: 12316 corp: 2/37b lim: 50 exec/s: 0 rss: 73Mb L: 36/36 MS: 1 InsertRepeatedBytes- 00:07:34.439 [2024-11-27 15:06:59.634181] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:34.439 [2024-11-27 15:06:59.634215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.439 [2024-11-27 15:06:59.634262] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:34.439 [2024-11-27 15:06:59.634278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.439 [2024-11-27 
15:06:59.634331] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:34.439 [2024-11-27 15:06:59.634345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:34.439 #24 NEW cov: 12446 ft: 13006 corp: 3/74b lim: 50 exec/s: 0 rss: 73Mb L: 37/37 MS: 1 InsertByte- 00:07:34.439 [2024-11-27 15:06:59.694448] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:34.439 [2024-11-27 15:06:59.694476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.439 [2024-11-27 15:06:59.694543] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:34.439 [2024-11-27 15:06:59.694559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.439 [2024-11-27 15:06:59.694616] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:34.439 [2024-11-27 15:06:59.694632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:34.439 [2024-11-27 15:06:59.694697] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:34.439 [2024-11-27 15:06:59.694712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:34.439 #25 NEW cov: 12452 ft: 13561 corp: 4/120b lim: 50 exec/s: 0 rss: 73Mb L: 46/46 MS: 1 CopyPart- 00:07:34.439 [2024-11-27 15:06:59.734344] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:34.439 [2024-11-27 15:06:59.734370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.439 [2024-11-27 15:06:59.734419] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:34.439 [2024-11-27 15:06:59.734435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.439 [2024-11-27 15:06:59.734489] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:34.439 [2024-11-27 15:06:59.734504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:34.439 #26 NEW cov: 12537 ft: 13749 corp: 5/151b lim: 50 exec/s: 0 rss: 74Mb L: 31/46 MS: 1 EraseBytes- 00:07:34.699 [2024-11-27 15:06:59.794549] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:34.699 [2024-11-27 15:06:59.794575] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.699 [2024-11-27 15:06:59.794646] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:34.699 [2024-11-27 15:06:59.794662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.699 [2024-11-27 15:06:59.794719] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:34.699 [2024-11-27 15:06:59.794735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:34.699 #27 NEW cov: 12537 ft: 13892 corp: 6/190b lim: 50 exec/s: 0 rss: 74Mb L: 39/46 MS: 1 CMP- DE: "\001\037"- 00:07:34.699 [2024-11-27 15:06:59.854874] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:34.699 [2024-11-27 15:06:59.854901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.699 [2024-11-27 15:06:59.854965] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:34.699 [2024-11-27 15:06:59.854981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.699 [2024-11-27 15:06:59.855032] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:34.699 [2024-11-27 15:06:59.855046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:34.699 [2024-11-27 15:06:59.855103] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:34.699 [2024-11-27 15:06:59.855120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:34.699 #29 NEW cov: 12537 ft: 14034 corp: 7/230b lim: 50 exec/s: 0 rss: 74Mb L: 40/46 MS: 2 CMP-InsertRepeatedBytes- DE: "\000\000\000\007"- 00:07:34.699 [2024-11-27 15:06:59.894956] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:34.699 [2024-11-27 15:06:59.894982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.699 [2024-11-27 15:06:59.895051] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:34.699 [2024-11-27 15:06:59.895066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.699 [2024-11-27 15:06:59.895119] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:34.699 [2024-11-27 15:06:59.895134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:34.699 [2024-11-27 15:06:59.895187] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:34.699 [2024-11-27 15:06:59.895204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:34.699 #30 NEW cov: 12537 ft: 14088 corp: 8/270b lim: 50 exec/s: 0 rss: 74Mb L: 40/46 MS: 1 PersAutoDict- DE: "\000\000\000\007"- 00:07:34.699 [2024-11-27 15:06:59.935063] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:34.699 [2024-11-27 15:06:59.935090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 
sqhd:0002 p:0 m:0 dnr:1 00:07:34.699 [2024-11-27 15:06:59.935141] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:34.699 [2024-11-27 15:06:59.935157] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.699 [2024-11-27 15:06:59.935208] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:34.699 [2024-11-27 15:06:59.935223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:34.699 [2024-11-27 15:06:59.935275] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:34.699 [2024-11-27 15:06:59.935291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:34.699 #31 NEW cov: 12537 ft: 14136 corp: 9/319b lim: 50 exec/s: 0 rss: 74Mb L: 49/49 MS: 1 CrossOver- 00:07:34.699 [2024-11-27 15:06:59.974913] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:34.699 [2024-11-27 15:06:59.974940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.699 [2024-11-27 15:06:59.974995] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:34.699 [2024-11-27 15:06:59.975011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.699 #32 NEW cov: 12537 ft: 14494 corp: 10/348b lim: 50 exec/s: 0 rss: 74Mb L: 29/49 MS: 1 EraseBytes- 00:07:34.699 [2024-11-27 15:07:00.015610] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:34.699 [2024-11-27 15:07:00.015641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.699 [2024-11-27 15:07:00.015700] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:34.699 [2024-11-27 15:07:00.015715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.699 [2024-11-27 15:07:00.015768] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:34.699 [2024-11-27 15:07:00.015783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:34.699 [2024-11-27 15:07:00.015837] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:34.699 [2024-11-27 15:07:00.015853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:34.959 #33 NEW cov: 12537 ft: 14622 corp: 11/390b lim: 50 exec/s: 0 rss: 74Mb L: 42/49 MS: 1 CopyPart- 00:07:34.959 [2024-11-27 15:07:00.075406] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:34.959 [2024-11-27 15:07:00.075436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 
m:0 dnr:1 00:07:34.959 [2024-11-27 15:07:00.075480] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:34.959 [2024-11-27 15:07:00.075496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.959 [2024-11-27 15:07:00.075548] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:34.959 [2024-11-27 15:07:00.075563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:34.959 #34 NEW cov: 12537 ft: 14676 corp: 12/429b lim: 50 exec/s: 0 rss: 74Mb L: 39/49 MS: 1 PersAutoDict- DE: "\000\000\000\007"- 00:07:34.959 [2024-11-27 15:07:00.115613] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:34.959 [2024-11-27 15:07:00.115643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.959 [2024-11-27 15:07:00.115691] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:34.959 [2024-11-27 15:07:00.115709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.959 [2024-11-27 15:07:00.115762] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:34.959 [2024-11-27 15:07:00.115778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:34.959 [2024-11-27 15:07:00.115835] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:34.959 [2024-11-27 15:07:00.115850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:34.959 #35 NEW cov: 12537 ft: 14705 corp: 13/471b lim: 50 exec/s: 0 rss: 74Mb L: 42/49 MS: 1 ChangeBit- 00:07:34.959 [2024-11-27 15:07:00.175777] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:34.959 [2024-11-27 15:07:00.175805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.959 [2024-11-27 15:07:00.175872] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:34.959 [2024-11-27 15:07:00.175888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.959 [2024-11-27 15:07:00.175941] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:34.959 [2024-11-27 15:07:00.175958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:34.959 [2024-11-27 15:07:00.176013] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:34.959 [2024-11-27 15:07:00.176030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:34.959 NEW_FUNC[1/1]: 0x1c47d08 in get_rusage 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:34.959 #36 NEW cov: 12560 ft: 14754 corp: 14/520b lim: 50 exec/s: 0 rss: 74Mb L: 49/49 MS: 1 ChangeByte- 00:07:34.959 [2024-11-27 15:07:00.235924] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:34.959 [2024-11-27 15:07:00.235952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.959 [2024-11-27 15:07:00.236006] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:34.959 [2024-11-27 15:07:00.236022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.959 [2024-11-27 15:07:00.236074] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:34.959 [2024-11-27 15:07:00.236089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:34.959 [2024-11-27 15:07:00.236141] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:34.959 [2024-11-27 15:07:00.236157] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:34.959 #37 NEW cov: 12560 ft: 14811 corp: 15/561b lim: 50 exec/s: 37 rss: 74Mb L: 41/49 MS: 1 CopyPart- 00:07:34.959 [2024-11-27 15:07:00.296075] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:34.959 [2024-11-27 15:07:00.296102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.959 [2024-11-27 15:07:00.296156] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:34.959 [2024-11-27 15:07:00.296172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.959 [2024-11-27 15:07:00.296223] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:34.959 [2024-11-27 15:07:00.296239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:34.959 [2024-11-27 15:07:00.296292] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:34.959 [2024-11-27 15:07:00.296307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:35.218 #41 NEW cov: 12560 ft: 14835 corp: 16/607b lim: 50 exec/s: 41 rss: 74Mb L: 46/49 MS: 4 ShuffleBytes-ShuffleBytes-ChangeBinInt-InsertRepeatedBytes- 00:07:35.218 [2024-11-27 15:07:00.336029] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:35.218 [2024-11-27 15:07:00.336056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:35.219 [2024-11-27 15:07:00.336114] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:35.219 [2024-11-27 15:07:00.336129] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:35.219 [2024-11-27 15:07:00.336181] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:35.219 [2024-11-27 15:07:00.336197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:35.219 #42 NEW cov: 12560 ft: 14852 corp: 17/639b lim: 50 exec/s: 42 rss: 74Mb L: 32/49 MS: 1 InsertByte- 00:07:35.219 [2024-11-27 15:07:00.376466] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:35.219 [2024-11-27 15:07:00.376494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:35.219 [2024-11-27 15:07:00.376565] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:35.219 [2024-11-27 15:07:00.376582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:35.219 [2024-11-27 15:07:00.376640] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:35.219 [2024-11-27 15:07:00.376655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:35.219 [2024-11-27 15:07:00.376707] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:35.219 [2024-11-27 15:07:00.376723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:35.219 [2024-11-27 15:07:00.376777] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:07:35.219 [2024-11-27 15:07:00.376792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:35.219 #43 NEW cov: 12560 ft: 14928 corp: 18/689b lim: 50 exec/s: 43 rss: 74Mb L: 50/50 MS: 1 InsertByte- 00:07:35.219 [2024-11-27 15:07:00.436492] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:35.219 [2024-11-27 15:07:00.436520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:35.219 [2024-11-27 15:07:00.436567] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:35.219 [2024-11-27 15:07:00.436582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:35.219 [2024-11-27 15:07:00.436640] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:35.219 [2024-11-27 15:07:00.436655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:35.219 [2024-11-27 15:07:00.436709] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:35.219 [2024-11-27 15:07:00.436725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 
00:07:35.219 #44 NEW cov: 12560 ft: 14943 corp: 19/732b lim: 50 exec/s: 44 rss: 74Mb L: 43/50 MS: 1 PersAutoDict- DE: "\000\000\000\007"- 00:07:35.219 [2024-11-27 15:07:00.496648] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:35.219 [2024-11-27 15:07:00.496676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:35.219 [2024-11-27 15:07:00.496739] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:35.219 [2024-11-27 15:07:00.496755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:35.219 [2024-11-27 15:07:00.496807] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:35.219 [2024-11-27 15:07:00.496823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:35.219 [2024-11-27 15:07:00.496878] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:35.219 [2024-11-27 15:07:00.496893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:35.219 #45 NEW cov: 12560 ft: 14979 corp: 20/778b lim: 50 exec/s: 45 rss: 75Mb L: 46/50 MS: 1 ChangeByte- 00:07:35.219 [2024-11-27 15:07:00.556831] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:35.219 [2024-11-27 15:07:00.556858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:35.219 [2024-11-27 15:07:00.556914] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:35.219 [2024-11-27 15:07:00.556927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:35.219 [2024-11-27 15:07:00.556982] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:35.219 [2024-11-27 15:07:00.556998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:35.219 [2024-11-27 15:07:00.557051] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:35.219 [2024-11-27 15:07:00.557066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:35.479 #46 NEW cov: 12560 ft: 15026 corp: 21/827b lim: 50 exec/s: 46 rss: 75Mb L: 49/50 MS: 1 CrossOver- 00:07:35.479 [2024-11-27 15:07:00.617019] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:35.479 [2024-11-27 15:07:00.617046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:35.479 [2024-11-27 15:07:00.617092] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:35.479 [2024-11-27 15:07:00.617108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 
sqhd:0003 p:0 m:0 dnr:1 00:07:35.479 [2024-11-27 15:07:00.617162] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:35.479 [2024-11-27 15:07:00.617177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:35.479 [2024-11-27 15:07:00.617229] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:35.479 [2024-11-27 15:07:00.617244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:35.479 #47 NEW cov: 12560 ft: 15043 corp: 22/871b lim: 50 exec/s: 47 rss: 75Mb L: 44/50 MS: 1 InsertByte- 00:07:35.479 [2024-11-27 15:07:00.676653] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:35.479 [2024-11-27 15:07:00.676680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:35.479 #48 NEW cov: 12560 ft: 15804 corp: 23/887b lim: 50 exec/s: 48 rss: 75Mb L: 16/50 MS: 1 CrossOver- 00:07:35.479 [2024-11-27 15:07:00.717238] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:35.479 [2024-11-27 15:07:00.717264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:35.479 [2024-11-27 15:07:00.717318] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:35.479 [2024-11-27 15:07:00.717333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:35.479 [2024-11-27 15:07:00.717384] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:35.479 [2024-11-27 15:07:00.717400] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:35.479 [2024-11-27 15:07:00.717454] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:35.479 [2024-11-27 15:07:00.717472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:35.479 #49 NEW cov: 12560 ft: 15825 corp: 24/933b lim: 50 exec/s: 49 rss: 75Mb L: 46/50 MS: 1 ChangeBinInt- 00:07:35.479 [2024-11-27 15:07:00.757361] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:35.479 [2024-11-27 15:07:00.757387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:35.479 [2024-11-27 15:07:00.757440] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:35.479 [2024-11-27 15:07:00.757455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:35.479 [2024-11-27 15:07:00.757508] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:35.479 [2024-11-27 15:07:00.757521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 
sqhd:0004 p:0 m:0 dnr:1 00:07:35.479 [2024-11-27 15:07:00.757573] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:35.479 [2024-11-27 15:07:00.757588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:35.479 #50 NEW cov: 12560 ft: 15844 corp: 25/979b lim: 50 exec/s: 50 rss: 75Mb L: 46/50 MS: 1 CrossOver- 00:07:35.479 [2024-11-27 15:07:00.817758] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:35.479 [2024-11-27 15:07:00.817785] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:35.479 [2024-11-27 15:07:00.817846] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:35.479 [2024-11-27 15:07:00.817861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:35.479 [2024-11-27 15:07:00.817914] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:35.479 [2024-11-27 15:07:00.817929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:35.479 [2024-11-27 15:07:00.817983] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:35.479 [2024-11-27 15:07:00.817999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:35.740 [2024-11-27 15:07:00.818054] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:07:35.740 [2024-11-27 15:07:00.818069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:35.740 #51 NEW cov: 12560 ft: 15856 corp: 26/1029b lim: 50 exec/s: 51 rss: 75Mb L: 50/50 MS: 1 CopyPart- 00:07:35.740 [2024-11-27 15:07:00.857330] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:35.740 [2024-11-27 15:07:00.857356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:35.740 [2024-11-27 15:07:00.857411] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:35.740 [2024-11-27 15:07:00.857427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:35.740 #52 NEW cov: 12560 ft: 15863 corp: 27/1058b lim: 50 exec/s: 52 rss: 75Mb L: 29/50 MS: 1 ShuffleBytes- 00:07:35.740 [2024-11-27 15:07:00.917341] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:35.740 [2024-11-27 15:07:00.917367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:35.740 #57 NEW cov: 12560 ft: 15864 corp: 28/1068b lim: 50 exec/s: 57 rss: 75Mb L: 10/50 MS: 5 InsertByte-EraseBytes-CMP-CrossOver-InsertByte- DE: "\377\221u\262mH/b"- 00:07:35.740 [2024-11-27 15:07:00.957948] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 
cid:0 nsid:0 00:07:35.740 [2024-11-27 15:07:00.957976] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:35.740 [2024-11-27 15:07:00.958042] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:35.740 [2024-11-27 15:07:00.958058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:35.740 [2024-11-27 15:07:00.958112] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:35.740 [2024-11-27 15:07:00.958128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:35.740 [2024-11-27 15:07:00.958182] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:35.740 [2024-11-27 15:07:00.958198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:35.740 #58 NEW cov: 12560 ft: 15875 corp: 29/1110b lim: 50 exec/s: 58 rss: 75Mb L: 42/50 MS: 1 CrossOver- 00:07:35.740 [2024-11-27 15:07:00.997943] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:35.740 [2024-11-27 15:07:00.997968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:35.740 [2024-11-27 15:07:00.998023] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:35.740 [2024-11-27 15:07:00.998039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:35.740 [2024-11-27 15:07:00.998096] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:35.740 [2024-11-27 15:07:00.998112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:35.740 #59 NEW cov: 12560 ft: 15895 corp: 30/1149b lim: 50 exec/s: 59 rss: 75Mb L: 39/50 MS: 1 ShuffleBytes- 00:07:35.740 [2024-11-27 15:07:01.037860] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:35.740 [2024-11-27 15:07:01.037886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:35.740 [2024-11-27 15:07:01.037925] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:35.740 [2024-11-27 15:07:01.037939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:36.051 #60 NEW cov: 12560 ft: 15902 corp: 31/1173b lim: 50 exec/s: 60 rss: 75Mb L: 24/50 MS: 1 EraseBytes- 00:07:36.051 [2024-11-27 15:07:01.098351] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:36.051 [2024-11-27 15:07:01.098378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.051 [2024-11-27 15:07:01.098431] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 
cid:1 nsid:0 00:07:36.051 [2024-11-27 15:07:01.098448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:36.051 [2024-11-27 15:07:01.098502] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:36.051 [2024-11-27 15:07:01.098517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:36.051 [2024-11-27 15:07:01.098571] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:36.051 [2024-11-27 15:07:01.098591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:36.051 #61 NEW cov: 12560 ft: 15937 corp: 32/1221b lim: 50 exec/s: 61 rss: 75Mb L: 48/50 MS: 1 InsertRepeatedBytes- 00:07:36.051 [2024-11-27 15:07:01.138648] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:36.051 [2024-11-27 15:07:01.138674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.051 [2024-11-27 15:07:01.138729] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:36.051 [2024-11-27 15:07:01.138744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:36.051 [2024-11-27 15:07:01.138796] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:36.051 [2024-11-27 15:07:01.138811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:36.051 [2024-11-27 15:07:01.138863] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:36.051 [2024-11-27 15:07:01.138878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:36.051 [2024-11-27 15:07:01.138932] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:07:36.051 [2024-11-27 15:07:01.138947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:36.051 #62 NEW cov: 12560 ft: 15992 corp: 33/1271b lim: 50 exec/s: 62 rss: 75Mb L: 50/50 MS: 1 ChangeBit- 00:07:36.051 [2024-11-27 15:07:01.198634] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:36.051 [2024-11-27 15:07:01.198661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.051 [2024-11-27 15:07:01.198736] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:36.051 [2024-11-27 15:07:01.198751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:36.051 [2024-11-27 15:07:01.198805] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:36.051 [2024-11-27 15:07:01.198819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:36.051 [2024-11-27 15:07:01.198871] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:36.051 [2024-11-27 15:07:01.198886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:36.051 #63 NEW cov: 12560 ft: 15994 corp: 34/1320b lim: 50 exec/s: 63 rss: 75Mb L: 49/50 MS: 1 PersAutoDict- DE: "\001\037"- 00:07:36.051 [2024-11-27 15:07:01.238763] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:36.051 [2024-11-27 15:07:01.238789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.051 [2024-11-27 15:07:01.238860] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:36.051 [2024-11-27 15:07:01.238877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:36.051 [2024-11-27 15:07:01.238931] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:36.051 [2024-11-27 15:07:01.238944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:36.051 [2024-11-27 15:07:01.239000] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:36.051 [2024-11-27 15:07:01.239015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:36.051 #64 pulse cov: 12560 ft: 16025 corp: 34/1320b lim: 50 exec/s: 32 rss: 75Mb 00:07:36.051 #64 NEW cov: 12560 ft: 16025 corp: 35/1366b lim: 50 exec/s: 32 rss: 75Mb L: 46/50 MS: 1 CrossOver- 00:07:36.051 #64 DONE cov: 12560 ft: 16025 corp: 35/1366b lim: 50 exec/s: 32 rss: 75Mb 00:07:36.051 ###### Recommended dictionary. ###### 00:07:36.051 "\001\037" # Uses: 1 00:07:36.051 "\000\000\000\007" # Uses: 3 00:07:36.051 "\377\221u\262mH/b" # Uses: 0 00:07:36.051 ###### End of recommended dictionary. 
###### 00:07:36.051 Done 64 runs in 2 second(s) 00:07:36.322 15:07:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_21.conf /var/tmp/suppress_nvmf_fuzz 00:07:36.322 15:07:01 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:36.322 15:07:01 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:36.322 15:07:01 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 22 1 0x1 00:07:36.322 15:07:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=22 00:07:36.322 15:07:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:36.322 15:07:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:36.322 15:07:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:07:36.322 15:07:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_22.conf 00:07:36.322 15:07:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:36.322 15:07:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:36.322 15:07:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 22 00:07:36.322 15:07:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4422 00:07:36.322 15:07:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:07:36.322 15:07:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' 00:07:36.322 15:07:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4422"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:36.322 15:07:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:36.322 15:07:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:36.322 15:07:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' -c /tmp/fuzz_json_22.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 -Z 22 00:07:36.322 [2024-11-27 15:07:01.431241] Starting SPDK v25.01-pre git sha1 2e10c84c8 / DPDK 24.03.0 initialization... 
00:07:36.322 [2024-11-27 15:07:01.431311] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2374933 ] 00:07:36.322 [2024-11-27 15:07:01.614150] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.322 [2024-11-27 15:07:01.647720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.581 [2024-11-27 15:07:01.707229] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:36.581 [2024-11-27 15:07:01.723589] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4422 *** 00:07:36.581 INFO: Running with entropic power schedule (0xFF, 100). 00:07:36.581 INFO: Seed: 3193923184 00:07:36.581 INFO: Loaded 1 modules (389699 inline 8-bit counters): 389699 [0x2c6ea8c, 0x2ccdccf), 00:07:36.581 INFO: Loaded 1 PC tables (389699 PCs): 389699 [0x2ccdcd0,0x32c0100), 00:07:36.581 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:07:36.581 INFO: A corpus is not provided, starting from an empty corpus 00:07:36.581 #2 INITED exec/s: 0 rss: 65Mb 00:07:36.581 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:36.581 This may also happen if the target rejected all inputs we tried so far 00:07:36.581 [2024-11-27 15:07:01.799936] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:36.581 [2024-11-27 15:07:01.799977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.581 [2024-11-27 15:07:01.800119] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:36.581 [2024-11-27 15:07:01.800144] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:36.839 NEW_FUNC[1/718]: 0x4633f8 in fuzz_nvm_reservation_register_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:644 00:07:36.839 NEW_FUNC[2/718]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:36.839 #14 NEW cov: 12359 ft: 12345 corp: 2/38b lim: 85 exec/s: 0 rss: 73Mb L: 37/37 MS: 2 CopyPart-InsertRepeatedBytes- 00:07:36.839 [2024-11-27 15:07:02.151300] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:36.839 [2024-11-27 15:07:02.151355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.839 [2024-11-27 15:07:02.151487] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:36.839 [2024-11-27 15:07:02.151512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:36.839 [2024-11-27 15:07:02.151649] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:36.840 [2024-11-27 15:07:02.151679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:36.840 
[2024-11-27 15:07:02.151811] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:36.840 [2024-11-27 15:07:02.151840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:37.098 #16 NEW cov: 12472 ft: 13553 corp: 3/117b lim: 85 exec/s: 0 rss: 73Mb L: 79/79 MS: 2 InsertByte-InsertRepeatedBytes- 00:07:37.098 [2024-11-27 15:07:02.211248] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:37.098 [2024-11-27 15:07:02.211282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:37.098 [2024-11-27 15:07:02.211386] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:37.098 [2024-11-27 15:07:02.211408] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:37.098 [2024-11-27 15:07:02.211525] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:37.098 [2024-11-27 15:07:02.211549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:37.098 [2024-11-27 15:07:02.211679] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:37.098 [2024-11-27 15:07:02.211703] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:37.098 #23 NEW cov: 12478 ft: 13831 corp: 4/196b lim: 85 exec/s: 0 rss: 73Mb L: 79/79 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:07:37.098 [2024-11-27 15:07:02.261416] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:37.098 [2024-11-27 15:07:02.261452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:37.098 [2024-11-27 15:07:02.261557] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:37.098 [2024-11-27 15:07:02.261578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:37.098 [2024-11-27 15:07:02.261712] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:37.098 [2024-11-27 15:07:02.261736] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:37.098 [2024-11-27 15:07:02.261849] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:37.098 [2024-11-27 15:07:02.261873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:37.098 #24 NEW cov: 12563 ft: 14089 corp: 5/278b lim: 85 exec/s: 0 rss: 73Mb L: 82/82 MS: 1 InsertRepeatedBytes- 00:07:37.098 [2024-11-27 15:07:02.321043] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:37.098 [2024-11-27 15:07:02.321076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 
cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:37.098 [2024-11-27 15:07:02.321202] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:37.098 [2024-11-27 15:07:02.321224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:37.098 #26 NEW cov: 12563 ft: 14258 corp: 6/322b lim: 85 exec/s: 0 rss: 73Mb L: 44/82 MS: 2 InsertByte-InsertRepeatedBytes- 00:07:37.098 [2024-11-27 15:07:02.371880] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:37.098 [2024-11-27 15:07:02.371914] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:37.098 [2024-11-27 15:07:02.372010] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:37.098 [2024-11-27 15:07:02.372033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:37.098 [2024-11-27 15:07:02.372152] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:37.098 [2024-11-27 15:07:02.372176] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:37.098 [2024-11-27 15:07:02.372297] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:37.098 [2024-11-27 15:07:02.372318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:37.098 #27 NEW cov: 12563 ft: 14373 corp: 7/401b lim: 85 exec/s: 0 rss: 73Mb L: 79/82 MS: 1 ChangeBinInt- 00:07:37.098 [2024-11-27 15:07:02.421470] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:37.098 [2024-11-27 15:07:02.421496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:37.098 [2024-11-27 15:07:02.421623] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:37.098 [2024-11-27 15:07:02.421661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:37.356 #28 NEW cov: 12563 ft: 14423 corp: 8/445b lim: 85 exec/s: 0 rss: 73Mb L: 44/82 MS: 1 ChangeByte- 00:07:37.356 [2024-11-27 15:07:02.492161] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:37.356 [2024-11-27 15:07:02.492193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:37.356 [2024-11-27 15:07:02.492315] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:37.356 [2024-11-27 15:07:02.492341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:37.356 [2024-11-27 15:07:02.492467] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:37.356 [2024-11-27 15:07:02.492487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR 
FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:37.356 [2024-11-27 15:07:02.492612] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:37.356 [2024-11-27 15:07:02.492649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:37.356 #29 NEW cov: 12563 ft: 14463 corp: 9/524b lim: 85 exec/s: 0 rss: 73Mb L: 79/82 MS: 1 ShuffleBytes- 00:07:37.356 [2024-11-27 15:07:02.551789] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:37.356 [2024-11-27 15:07:02.551822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:37.356 [2024-11-27 15:07:02.551941] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:37.356 [2024-11-27 15:07:02.551959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:37.356 #30 NEW cov: 12563 ft: 14545 corp: 10/569b lim: 85 exec/s: 0 rss: 73Mb L: 45/82 MS: 1 InsertByte- 00:07:37.356 [2024-11-27 15:07:02.601871] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:37.356 [2024-11-27 15:07:02.601905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:37.356 [2024-11-27 15:07:02.602016] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:37.356 [2024-11-27 15:07:02.602041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:37.356 #32 NEW cov: 12563 ft: 14576 corp: 11/614b lim: 85 exec/s: 0 rss: 73Mb L: 45/82 MS: 2 ChangeByte-CrossOver- 00:07:37.356 [2024-11-27 15:07:02.651815] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:37.356 [2024-11-27 15:07:02.651841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:37.615 NEW_FUNC[1/1]: 0x1c47d08 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:37.615 #33 NEW cov: 12586 ft: 15358 corp: 12/640b lim: 85 exec/s: 0 rss: 74Mb L: 26/82 MS: 1 EraseBytes- 00:07:37.615 [2024-11-27 15:07:02.722389] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:37.615 [2024-11-27 15:07:02.722422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:37.615 [2024-11-27 15:07:02.722544] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:37.615 [2024-11-27 15:07:02.722567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:37.615 #34 NEW cov: 12586 ft: 15395 corp: 13/684b lim: 85 exec/s: 0 rss: 74Mb L: 44/82 MS: 1 ChangeByte- 00:07:37.615 [2024-11-27 15:07:02.793060] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:37.615 [2024-11-27 15:07:02.793094] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:37.615 [2024-11-27 15:07:02.793175] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:37.615 [2024-11-27 15:07:02.793202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:37.615 [2024-11-27 15:07:02.793318] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:37.616 [2024-11-27 15:07:02.793345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:37.616 [2024-11-27 15:07:02.793462] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:37.616 [2024-11-27 15:07:02.793485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:37.616 #35 NEW cov: 12586 ft: 15408 corp: 14/763b lim: 85 exec/s: 35 rss: 74Mb L: 79/82 MS: 1 ChangeByte- 00:07:37.616 [2024-11-27 15:07:02.843248] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:37.616 [2024-11-27 15:07:02.843283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:37.616 [2024-11-27 15:07:02.843396] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:37.616 [2024-11-27 15:07:02.843418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:37.616 [2024-11-27 15:07:02.843531] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:37.616 [2024-11-27 15:07:02.843552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:37.616 [2024-11-27 15:07:02.843668] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:37.616 [2024-11-27 15:07:02.843692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:37.616 #36 NEW cov: 12586 ft: 15423 corp: 15/842b lim: 85 exec/s: 36 rss: 74Mb L: 79/82 MS: 1 ChangeBinInt- 00:07:37.616 [2024-11-27 15:07:02.913679] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:37.616 [2024-11-27 15:07:02.913711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:37.616 [2024-11-27 15:07:02.913801] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:37.616 [2024-11-27 15:07:02.913826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:37.616 [2024-11-27 15:07:02.913938] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:37.616 [2024-11-27 15:07:02.913959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 
p:0 m:0 dnr:1 00:07:37.616 [2024-11-27 15:07:02.914071] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:37.616 [2024-11-27 15:07:02.914089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:37.616 [2024-11-27 15:07:02.914211] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:4 nsid:0 00:07:37.616 [2024-11-27 15:07:02.914235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:37.882 #37 NEW cov: 12586 ft: 15506 corp: 16/927b lim: 85 exec/s: 37 rss: 74Mb L: 85/85 MS: 1 InsertRepeatedBytes- 00:07:37.882 [2024-11-27 15:07:02.983127] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:37.882 [2024-11-27 15:07:02.983158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:37.882 [2024-11-27 15:07:02.983302] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:37.882 [2024-11-27 15:07:02.983324] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:37.882 #41 NEW cov: 12586 ft: 15536 corp: 17/962b lim: 85 exec/s: 41 rss: 74Mb L: 35/85 MS: 4 ChangeBit-CrossOver-ChangeByte-CrossOver- 00:07:37.882 [2024-11-27 15:07:03.033743] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:37.882 [2024-11-27 15:07:03.033777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:37.882 [2024-11-27 15:07:03.033868] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:37.882 [2024-11-27 15:07:03.033889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:37.882 [2024-11-27 15:07:03.034012] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:37.882 [2024-11-27 15:07:03.034033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:37.882 [2024-11-27 15:07:03.034152] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:37.882 [2024-11-27 15:07:03.034174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:37.882 #42 NEW cov: 12586 ft: 15571 corp: 18/1041b lim: 85 exec/s: 42 rss: 74Mb L: 79/85 MS: 1 ShuffleBytes- 00:07:37.882 [2024-11-27 15:07:03.073932] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:37.882 [2024-11-27 15:07:03.073960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:37.882 [2024-11-27 15:07:03.074037] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:37.882 [2024-11-27 15:07:03.074060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR 
FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:37.882 [2024-11-27 15:07:03.074179] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:37.882 [2024-11-27 15:07:03.074201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:37.882 [2024-11-27 15:07:03.074314] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:37.882 [2024-11-27 15:07:03.074334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:37.882 #45 NEW cov: 12586 ft: 15598 corp: 19/1125b lim: 85 exec/s: 45 rss: 74Mb L: 84/85 MS: 3 CopyPart-CopyPart-InsertRepeatedBytes- 00:07:37.882 [2024-11-27 15:07:03.123980] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:37.882 [2024-11-27 15:07:03.124009] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:37.882 [2024-11-27 15:07:03.124097] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:37.882 [2024-11-27 15:07:03.124116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:37.882 [2024-11-27 15:07:03.124230] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:37.882 [2024-11-27 15:07:03.124252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:37.882 [2024-11-27 15:07:03.124371] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:37.882 [2024-11-27 15:07:03.124395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:37.882 #46 NEW cov: 12586 ft: 15678 corp: 20/1207b lim: 85 exec/s: 46 rss: 74Mb L: 82/85 MS: 1 CopyPart- 00:07:37.882 [2024-11-27 15:07:03.194210] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:37.882 [2024-11-27 15:07:03.194241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:37.882 [2024-11-27 15:07:03.194312] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:37.882 [2024-11-27 15:07:03.194334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:37.882 [2024-11-27 15:07:03.194446] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:37.882 [2024-11-27 15:07:03.194475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:37.882 [2024-11-27 15:07:03.194595] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:37.882 [2024-11-27 15:07:03.194624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:38.142 #47 NEW cov: 
12586 ft: 15693 corp: 21/1280b lim: 85 exec/s: 47 rss: 74Mb L: 73/85 MS: 1 EraseBytes- 00:07:38.142 [2024-11-27 15:07:03.264440] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:38.142 [2024-11-27 15:07:03.264470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:38.142 [2024-11-27 15:07:03.264539] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:38.142 [2024-11-27 15:07:03.264559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:38.142 [2024-11-27 15:07:03.264683] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:38.142 [2024-11-27 15:07:03.264706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:38.142 [2024-11-27 15:07:03.264840] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:38.142 [2024-11-27 15:07:03.264864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:38.142 #48 NEW cov: 12586 ft: 15722 corp: 22/1358b lim: 85 exec/s: 48 rss: 74Mb L: 78/85 MS: 1 EraseBytes- 00:07:38.142 [2024-11-27 15:07:03.334682] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:38.142 [2024-11-27 15:07:03.334715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:38.142 [2024-11-27 15:07:03.334797] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:38.142 [2024-11-27 15:07:03.334819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:38.142 [2024-11-27 15:07:03.334936] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:38.142 [2024-11-27 15:07:03.334958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:38.142 [2024-11-27 15:07:03.335082] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:38.142 [2024-11-27 15:07:03.335105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:38.142 #49 NEW cov: 12586 ft: 15743 corp: 23/1437b lim: 85 exec/s: 49 rss: 74Mb L: 79/85 MS: 1 ShuffleBytes- 00:07:38.142 [2024-11-27 15:07:03.384312] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:38.142 [2024-11-27 15:07:03.384345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:38.142 [2024-11-27 15:07:03.384452] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:38.142 [2024-11-27 15:07:03.384472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:38.142 #50 
NEW cov: 12586 ft: 15768 corp: 24/1481b lim: 85 exec/s: 50 rss: 74Mb L: 44/85 MS: 1 ChangeBit- 00:07:38.142 [2024-11-27 15:07:03.434928] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:38.142 [2024-11-27 15:07:03.434965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:38.142 [2024-11-27 15:07:03.435072] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:38.142 [2024-11-27 15:07:03.435094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:38.142 [2024-11-27 15:07:03.435209] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:38.142 [2024-11-27 15:07:03.435247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:38.142 [2024-11-27 15:07:03.435381] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:38.142 [2024-11-27 15:07:03.435404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:38.142 #51 NEW cov: 12586 ft: 15792 corp: 25/1559b lim: 85 exec/s: 51 rss: 74Mb L: 78/85 MS: 1 ShuffleBytes- 00:07:38.401 [2024-11-27 15:07:03.505200] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:38.401 [2024-11-27 15:07:03.505228] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:38.401 [2024-11-27 15:07:03.505309] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:38.401 [2024-11-27 15:07:03.505331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:38.401 [2024-11-27 15:07:03.505448] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:38.401 [2024-11-27 15:07:03.505466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:38.401 [2024-11-27 15:07:03.505584] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:38.401 [2024-11-27 15:07:03.505605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:38.401 #52 NEW cov: 12586 ft: 15806 corp: 26/1638b lim: 85 exec/s: 52 rss: 75Mb L: 79/85 MS: 1 ChangeBinInt- 00:07:38.401 [2024-11-27 15:07:03.574888] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:38.401 [2024-11-27 15:07:03.574921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:38.401 [2024-11-27 15:07:03.575027] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:38.401 [2024-11-27 15:07:03.575051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 
00:07:38.401 #53 NEW cov: 12586 ft: 15809 corp: 27/1682b lim: 85 exec/s: 53 rss: 75Mb L: 44/85 MS: 1 ChangeBit- 00:07:38.401 [2024-11-27 15:07:03.645618] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:38.401 [2024-11-27 15:07:03.645654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:38.401 [2024-11-27 15:07:03.645723] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:38.401 [2024-11-27 15:07:03.645741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:38.401 [2024-11-27 15:07:03.645853] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:38.401 [2024-11-27 15:07:03.645870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:38.401 [2024-11-27 15:07:03.645992] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:38.401 [2024-11-27 15:07:03.646012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:38.401 #54 NEW cov: 12586 ft: 15822 corp: 28/1760b lim: 85 exec/s: 54 rss: 75Mb L: 78/85 MS: 1 ShuffleBytes- 00:07:38.401 [2024-11-27 15:07:03.715324] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:38.401 [2024-11-27 15:07:03.715354] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:38.401 [2024-11-27 15:07:03.715465] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:38.401 [2024-11-27 15:07:03.715489] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:38.660 #55 NEW cov: 12586 ft: 15837 corp: 29/1805b lim: 85 exec/s: 55 rss: 75Mb L: 45/85 MS: 1 ChangeByte- 00:07:38.660 [2024-11-27 15:07:03.785473] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:38.660 [2024-11-27 15:07:03.785506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:38.660 [2024-11-27 15:07:03.785627] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:38.660 [2024-11-27 15:07:03.785661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:38.660 #56 NEW cov: 12586 ft: 15845 corp: 30/1851b lim: 85 exec/s: 28 rss: 75Mb L: 46/85 MS: 1 InsertByte- 00:07:38.660 #56 DONE cov: 12586 ft: 15845 corp: 30/1851b lim: 85 exec/s: 28 rss: 75Mb 00:07:38.660 Done 56 runs in 2 second(s) 00:07:38.660 15:07:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_22.conf /var/tmp/suppress_nvmf_fuzz 00:07:38.660 15:07:03 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:38.660 15:07:03 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:38.660 15:07:03 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 23 1 0x1 
00:07:38.660 15:07:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=23 00:07:38.660 15:07:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:38.660 15:07:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:38.660 15:07:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:07:38.660 15:07:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_23.conf 00:07:38.660 15:07:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:38.660 15:07:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:38.660 15:07:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 23 00:07:38.660 15:07:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4423 00:07:38.660 15:07:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:07:38.660 15:07:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' 00:07:38.660 15:07:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4423"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:38.660 15:07:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:38.661 15:07:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:38.661 15:07:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' -c /tmp/fuzz_json_23.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 -Z 23 00:07:38.661 [2024-11-27 15:07:03.953844] Starting SPDK v25.01-pre git sha1 2e10c84c8 / DPDK 24.03.0 initialization... 00:07:38.661 [2024-11-27 15:07:03.953912] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2375468 ] 00:07:38.919 [2024-11-27 15:07:04.147282] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.919 [2024-11-27 15:07:04.184131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.919 [2024-11-27 15:07:04.243387] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:39.178 [2024-11-27 15:07:04.259720] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4423 *** 00:07:39.178 INFO: Running with entropic power schedule (0xFF, 100). 
00:07:39.178 INFO: Seed: 1432935634 00:07:39.178 INFO: Loaded 1 modules (389699 inline 8-bit counters): 389699 [0x2c6ea8c, 0x2ccdccf), 00:07:39.178 INFO: Loaded 1 PC tables (389699 PCs): 389699 [0x2ccdcd0,0x32c0100), 00:07:39.178 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:07:39.178 INFO: A corpus is not provided, starting from an empty corpus 00:07:39.178 #2 INITED exec/s: 0 rss: 66Mb 00:07:39.178 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:39.178 This may also happen if the target rejected all inputs we tried so far 00:07:39.178 [2024-11-27 15:07:04.318257] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:39.178 [2024-11-27 15:07:04.318286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.178 [2024-11-27 15:07:04.318338] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:39.178 [2024-11-27 15:07:04.318354] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:39.178 [2024-11-27 15:07:04.318408] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:39.178 [2024-11-27 15:07:04.318423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:39.436 NEW_FUNC[1/717]: 0x466638 in fuzz_nvm_reservation_report_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:671 00:07:39.436 NEW_FUNC[2/717]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:39.436 #13 NEW cov: 12292 ft: 12291 corp: 2/20b lim: 25 exec/s: 0 rss: 73Mb L: 19/19 MS: 1 InsertRepeatedBytes- 00:07:39.436 [2024-11-27 15:07:04.649431] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:39.436 [2024-11-27 15:07:04.649487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.436 [2024-11-27 15:07:04.649567] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:39.436 [2024-11-27 15:07:04.649596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:39.436 [2024-11-27 15:07:04.649685] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:39.436 [2024-11-27 15:07:04.649711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:39.436 [2024-11-27 15:07:04.649789] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:39.436 [2024-11-27 15:07:04.649817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:39.436 #14 NEW cov: 12405 ft: 13522 corp: 3/42b lim: 25 exec/s: 0 rss: 73Mb L: 22/22 MS: 1 InsertRepeatedBytes- 00:07:39.436 [2024-11-27 15:07:04.719352] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: 
RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:39.436 [2024-11-27 15:07:04.719380] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.436 [2024-11-27 15:07:04.719426] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:39.436 [2024-11-27 15:07:04.719443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:39.436 [2024-11-27 15:07:04.719496] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:39.436 [2024-11-27 15:07:04.719512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:39.436 [2024-11-27 15:07:04.719565] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:39.436 [2024-11-27 15:07:04.719579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:39.436 #15 NEW cov: 12411 ft: 13624 corp: 4/64b lim: 25 exec/s: 0 rss: 73Mb L: 22/22 MS: 1 CrossOver- 00:07:39.696 [2024-11-27 15:07:04.779408] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:39.696 [2024-11-27 15:07:04.779436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.696 [2024-11-27 15:07:04.779475] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:39.696 [2024-11-27 15:07:04.779493] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:39.696 [2024-11-27 15:07:04.779547] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:39.696 [2024-11-27 15:07:04.779562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:39.696 #16 NEW cov: 12496 ft: 13917 corp: 5/82b lim: 25 exec/s: 0 rss: 73Mb L: 18/22 MS: 1 EraseBytes- 00:07:39.696 [2024-11-27 15:07:04.819624] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:39.696 [2024-11-27 15:07:04.819651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.696 [2024-11-27 15:07:04.819708] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:39.696 [2024-11-27 15:07:04.819724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:39.696 [2024-11-27 15:07:04.819777] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:39.696 [2024-11-27 15:07:04.819794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:39.696 [2024-11-27 15:07:04.819847] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:39.696 [2024-11-27 15:07:04.819863] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:39.696 #17 NEW cov: 12496 ft: 14037 corp: 6/104b lim: 25 exec/s: 0 rss: 74Mb L: 22/22 MS: 1 ChangeByte- 00:07:39.696 [2024-11-27 15:07:04.879774] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:39.696 [2024-11-27 15:07:04.879801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.696 [2024-11-27 15:07:04.879872] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:39.696 [2024-11-27 15:07:04.879886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:39.696 [2024-11-27 15:07:04.879940] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:39.696 [2024-11-27 15:07:04.879956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:39.696 [2024-11-27 15:07:04.880009] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:39.696 [2024-11-27 15:07:04.880024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:39.696 #18 NEW cov: 12496 ft: 14103 corp: 7/126b lim: 25 exec/s: 0 rss: 74Mb L: 22/22 MS: 1 ChangeBinInt- 00:07:39.696 [2024-11-27 15:07:04.919871] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:39.696 [2024-11-27 15:07:04.919898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.696 [2024-11-27 15:07:04.919953] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:39.696 [2024-11-27 15:07:04.919968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:39.696 [2024-11-27 15:07:04.920020] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:39.696 [2024-11-27 15:07:04.920036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:39.696 [2024-11-27 15:07:04.920092] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:39.696 [2024-11-27 15:07:04.920108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:39.696 #19 NEW cov: 12496 ft: 14199 corp: 8/148b lim: 25 exec/s: 0 rss: 74Mb L: 22/22 MS: 1 ShuffleBytes- 00:07:39.696 [2024-11-27 15:07:04.980029] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:39.696 [2024-11-27 15:07:04.980057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.696 [2024-11-27 15:07:04.980128] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:39.696 [2024-11-27 15:07:04.980142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:39.696 [2024-11-27 15:07:04.980195] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:39.696 [2024-11-27 15:07:04.980211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:39.696 [2024-11-27 15:07:04.980264] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:39.696 [2024-11-27 15:07:04.980279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:39.696 #20 NEW cov: 12496 ft: 14256 corp: 9/170b lim: 25 exec/s: 0 rss: 74Mb L: 22/22 MS: 1 ChangeBit- 00:07:39.956 [2024-11-27 15:07:05.040251] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:39.956 [2024-11-27 15:07:05.040281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.956 [2024-11-27 15:07:05.040337] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:39.956 [2024-11-27 15:07:05.040353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:39.956 [2024-11-27 15:07:05.040406] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:39.956 [2024-11-27 15:07:05.040421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:39.956 [2024-11-27 15:07:05.040476] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:39.956 [2024-11-27 15:07:05.040490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:39.956 #21 NEW cov: 12496 ft: 14284 corp: 10/192b lim: 25 exec/s: 0 rss: 74Mb L: 22/22 MS: 1 ShuffleBytes- 00:07:39.956 [2024-11-27 15:07:05.080319] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:39.956 [2024-11-27 15:07:05.080346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.956 [2024-11-27 15:07:05.080416] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:39.956 [2024-11-27 15:07:05.080432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:39.956 [2024-11-27 15:07:05.080483] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:39.956 [2024-11-27 15:07:05.080498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:39.956 [2024-11-27 15:07:05.080550] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:39.956 [2024-11-27 15:07:05.080566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:39.956 #22 NEW cov: 12496 ft: 14318 corp: 11/216b lim: 
25 exec/s: 0 rss: 74Mb L: 24/24 MS: 1 CopyPart- 00:07:39.956 [2024-11-27 15:07:05.120322] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:39.956 [2024-11-27 15:07:05.120348] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.956 [2024-11-27 15:07:05.120397] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:39.956 [2024-11-27 15:07:05.120412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:39.956 [2024-11-27 15:07:05.120465] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:39.956 [2024-11-27 15:07:05.120480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:39.956 #23 NEW cov: 12496 ft: 14344 corp: 12/234b lim: 25 exec/s: 0 rss: 74Mb L: 18/24 MS: 1 ShuffleBytes- 00:07:39.956 [2024-11-27 15:07:05.180569] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:39.956 [2024-11-27 15:07:05.180601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.956 [2024-11-27 15:07:05.180669] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:39.956 [2024-11-27 15:07:05.180685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:39.956 [2024-11-27 15:07:05.180737] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:39.956 [2024-11-27 15:07:05.180756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:39.957 [2024-11-27 15:07:05.180812] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:39.957 [2024-11-27 15:07:05.180826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:39.957 NEW_FUNC[1/1]: 0x1c47d08 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:39.957 #24 NEW cov: 12519 ft: 14413 corp: 13/258b lim: 25 exec/s: 0 rss: 74Mb L: 24/24 MS: 1 CMP- DE: "\007\000"- 00:07:39.957 [2024-11-27 15:07:05.220717] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:39.957 [2024-11-27 15:07:05.220743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.957 [2024-11-27 15:07:05.220797] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:39.957 [2024-11-27 15:07:05.220811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:39.957 [2024-11-27 15:07:05.220864] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:39.957 [2024-11-27 15:07:05.220879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR 
FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:39.957 [2024-11-27 15:07:05.220932] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:39.957 [2024-11-27 15:07:05.220947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:39.957 #25 NEW cov: 12519 ft: 14496 corp: 14/280b lim: 25 exec/s: 0 rss: 74Mb L: 22/24 MS: 1 CopyPart- 00:07:39.957 [2024-11-27 15:07:05.260784] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:39.957 [2024-11-27 15:07:05.260811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.957 [2024-11-27 15:07:05.260882] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:39.957 [2024-11-27 15:07:05.260897] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:39.957 [2024-11-27 15:07:05.260950] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:39.957 [2024-11-27 15:07:05.260964] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:39.957 [2024-11-27 15:07:05.261018] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:39.957 [2024-11-27 15:07:05.261033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:40.216 #26 NEW cov: 12519 ft: 14523 corp: 15/302b lim: 25 exec/s: 26 rss: 74Mb L: 22/24 MS: 1 ChangeByte- 00:07:40.216 [2024-11-27 15:07:05.320969] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:40.216 [2024-11-27 15:07:05.320996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:40.216 [2024-11-27 15:07:05.321068] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:40.216 [2024-11-27 15:07:05.321084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:40.216 [2024-11-27 15:07:05.321137] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:40.216 [2024-11-27 15:07:05.321151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:40.216 [2024-11-27 15:07:05.321206] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:40.216 [2024-11-27 15:07:05.321220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:40.216 #27 NEW cov: 12519 ft: 14584 corp: 16/325b lim: 25 exec/s: 27 rss: 74Mb L: 23/24 MS: 1 InsertByte- 00:07:40.216 [2024-11-27 15:07:05.380886] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:40.216 [2024-11-27 15:07:05.380913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 
cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:40.216 [2024-11-27 15:07:05.380967] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:40.216 [2024-11-27 15:07:05.380982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:40.216 #28 NEW cov: 12519 ft: 14920 corp: 17/336b lim: 25 exec/s: 28 rss: 74Mb L: 11/24 MS: 1 EraseBytes- 00:07:40.217 [2024-11-27 15:07:05.421010] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:40.217 [2024-11-27 15:07:05.421036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:40.217 [2024-11-27 15:07:05.421089] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:40.217 [2024-11-27 15:07:05.421106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:40.217 #29 NEW cov: 12519 ft: 14950 corp: 18/347b lim: 25 exec/s: 29 rss: 74Mb L: 11/24 MS: 1 ShuffleBytes- 00:07:40.217 [2024-11-27 15:07:05.481184] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:40.217 [2024-11-27 15:07:05.481211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:40.217 [2024-11-27 15:07:05.481250] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:40.217 [2024-11-27 15:07:05.481267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:40.217 #30 NEW cov: 12519 ft: 14974 corp: 19/358b lim: 25 exec/s: 30 rss: 74Mb L: 11/24 MS: 1 ChangeByte- 00:07:40.217 [2024-11-27 15:07:05.541582] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:40.217 [2024-11-27 15:07:05.541613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:40.217 [2024-11-27 15:07:05.541686] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:40.217 [2024-11-27 15:07:05.541701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:40.217 [2024-11-27 15:07:05.541756] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:40.217 [2024-11-27 15:07:05.541772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:40.217 [2024-11-27 15:07:05.541828] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:40.217 [2024-11-27 15:07:05.541843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:40.476 #31 NEW cov: 12519 ft: 14979 corp: 20/380b lim: 25 exec/s: 31 rss: 74Mb L: 22/24 MS: 1 PersAutoDict- DE: "\007\000"- 00:07:40.476 [2024-11-27 15:07:05.581730] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:40.476 [2024-11-27 
15:07:05.581756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:40.476 [2024-11-27 15:07:05.581801] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:40.476 [2024-11-27 15:07:05.581815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:40.476 [2024-11-27 15:07:05.581869] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:40.476 [2024-11-27 15:07:05.581885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:40.476 [2024-11-27 15:07:05.581941] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:40.476 [2024-11-27 15:07:05.581957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:40.476 #32 NEW cov: 12519 ft: 15001 corp: 21/400b lim: 25 exec/s: 32 rss: 75Mb L: 20/24 MS: 1 CopyPart- 00:07:40.476 [2024-11-27 15:07:05.641968] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:40.476 [2024-11-27 15:07:05.641995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:40.476 [2024-11-27 15:07:05.642065] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:40.476 [2024-11-27 15:07:05.642080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:40.476 [2024-11-27 15:07:05.642136] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:40.476 [2024-11-27 15:07:05.642151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:40.476 [2024-11-27 15:07:05.642206] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:40.476 [2024-11-27 15:07:05.642220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:40.476 [2024-11-27 15:07:05.642276] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:07:40.476 [2024-11-27 15:07:05.642290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:40.476 #36 NEW cov: 12519 ft: 15053 corp: 22/425b lim: 25 exec/s: 36 rss: 75Mb L: 25/25 MS: 4 CrossOver-EraseBytes-ChangeBit-InsertRepeatedBytes- 00:07:40.476 [2024-11-27 15:07:05.681883] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:40.476 [2024-11-27 15:07:05.681910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:40.476 [2024-11-27 15:07:05.681976] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:40.476 [2024-11-27 15:07:05.681993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE 
OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:40.476 [2024-11-27 15:07:05.682048] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:40.476 [2024-11-27 15:07:05.682063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:40.477 #37 NEW cov: 12519 ft: 15084 corp: 23/440b lim: 25 exec/s: 37 rss: 75Mb L: 15/25 MS: 1 EraseBytes- 00:07:40.477 [2024-11-27 15:07:05.742133] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:40.477 [2024-11-27 15:07:05.742159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:40.477 [2024-11-27 15:07:05.742232] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:40.477 [2024-11-27 15:07:05.742252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:40.477 [2024-11-27 15:07:05.742306] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:40.477 [2024-11-27 15:07:05.742323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:40.477 [2024-11-27 15:07:05.742377] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:40.477 [2024-11-27 15:07:05.742393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:40.477 #38 NEW cov: 12519 ft: 15108 corp: 24/464b lim: 25 exec/s: 38 rss: 75Mb L: 24/25 MS: 1 ShuffleBytes- 00:07:40.477 [2024-11-27 15:07:05.802322] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:40.477 [2024-11-27 15:07:05.802348] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:40.477 [2024-11-27 15:07:05.802417] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:40.477 [2024-11-27 15:07:05.802433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:40.477 [2024-11-27 15:07:05.802486] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:40.477 [2024-11-27 15:07:05.802502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:40.477 [2024-11-27 15:07:05.802559] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:40.477 [2024-11-27 15:07:05.802574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:40.737 #39 NEW cov: 12519 ft: 15123 corp: 25/488b lim: 25 exec/s: 39 rss: 75Mb L: 24/25 MS: 1 ChangeBit- 00:07:40.737 [2024-11-27 15:07:05.862448] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:40.737 [2024-11-27 15:07:05.862474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) 
qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:40.737 [2024-11-27 15:07:05.862545] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:40.737 [2024-11-27 15:07:05.862562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:40.737 [2024-11-27 15:07:05.862619] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:40.737 [2024-11-27 15:07:05.862636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:40.737 [2024-11-27 15:07:05.862699] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:40.737 [2024-11-27 15:07:05.862715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:40.737 #40 NEW cov: 12519 ft: 15132 corp: 26/510b lim: 25 exec/s: 40 rss: 75Mb L: 22/25 MS: 1 ChangeByte- 00:07:40.737 [2024-11-27 15:07:05.902573] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:40.737 [2024-11-27 15:07:05.902604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:40.737 [2024-11-27 15:07:05.902678] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:40.737 [2024-11-27 15:07:05.902695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:40.737 [2024-11-27 15:07:05.902751] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:40.737 [2024-11-27 15:07:05.902770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:40.737 [2024-11-27 15:07:05.902826] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:40.737 [2024-11-27 15:07:05.902841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:40.737 #41 NEW cov: 12519 ft: 15149 corp: 27/532b lim: 25 exec/s: 41 rss: 75Mb L: 22/25 MS: 1 CrossOver- 00:07:40.737 [2024-11-27 15:07:05.942827] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:40.737 [2024-11-27 15:07:05.942854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:40.737 [2024-11-27 15:07:05.942924] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:40.737 [2024-11-27 15:07:05.942940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:40.737 [2024-11-27 15:07:05.942995] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:40.737 [2024-11-27 15:07:05.943011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:40.737 [2024-11-27 15:07:05.943065] nvme_qpair.c: 256:nvme_io_qpair_print_command: 
*NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:40.737 [2024-11-27 15:07:05.943080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:40.737 [2024-11-27 15:07:05.943136] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:07:40.737 [2024-11-27 15:07:05.943151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:40.737 #42 NEW cov: 12519 ft: 15186 corp: 28/557b lim: 25 exec/s: 42 rss: 75Mb L: 25/25 MS: 1 CopyPart- 00:07:40.737 [2024-11-27 15:07:06.002627] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:40.737 [2024-11-27 15:07:06.002654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:40.737 [2024-11-27 15:07:06.002703] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:40.737 [2024-11-27 15:07:06.002718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:40.737 #43 NEW cov: 12519 ft: 15206 corp: 29/568b lim: 25 exec/s: 43 rss: 75Mb L: 11/25 MS: 1 ChangeBinInt- 00:07:40.737 [2024-11-27 15:07:06.042939] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:40.737 [2024-11-27 15:07:06.042964] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:40.737 [2024-11-27 15:07:06.043034] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:40.737 [2024-11-27 15:07:06.043050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:40.737 [2024-11-27 15:07:06.043100] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:40.737 [2024-11-27 15:07:06.043115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:40.737 [2024-11-27 15:07:06.043167] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:40.737 [2024-11-27 15:07:06.043182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:40.737 #44 NEW cov: 12519 ft: 15233 corp: 30/592b lim: 25 exec/s: 44 rss: 75Mb L: 24/25 MS: 1 CrossOver- 00:07:40.997 [2024-11-27 15:07:06.083085] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:40.997 [2024-11-27 15:07:06.083112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:40.997 [2024-11-27 15:07:06.083170] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:40.997 [2024-11-27 15:07:06.083185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:40.997 [2024-11-27 15:07:06.083236] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: 
RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:40.997 [2024-11-27 15:07:06.083251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:40.997 [2024-11-27 15:07:06.083303] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:40.997 [2024-11-27 15:07:06.083319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:40.997 #45 NEW cov: 12519 ft: 15319 corp: 31/616b lim: 25 exec/s: 45 rss: 75Mb L: 24/25 MS: 1 ChangeBit- 00:07:40.997 [2024-11-27 15:07:06.143147] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:40.997 [2024-11-27 15:07:06.143173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:40.997 [2024-11-27 15:07:06.143240] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:40.997 [2024-11-27 15:07:06.143256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:40.997 [2024-11-27 15:07:06.143310] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:40.997 [2024-11-27 15:07:06.143326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:40.997 #46 NEW cov: 12519 ft: 15326 corp: 32/631b lim: 25 exec/s: 46 rss: 75Mb L: 15/25 MS: 1 InsertRepeatedBytes- 00:07:40.997 [2024-11-27 15:07:06.203429] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:40.997 [2024-11-27 15:07:06.203455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:40.997 [2024-11-27 15:07:06.203514] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:40.997 [2024-11-27 15:07:06.203529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:40.997 [2024-11-27 15:07:06.203582] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:40.997 [2024-11-27 15:07:06.203601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:40.997 [2024-11-27 15:07:06.203653] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:40.997 [2024-11-27 15:07:06.203666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:40.997 #47 NEW cov: 12519 ft: 15335 corp: 33/655b lim: 25 exec/s: 47 rss: 75Mb L: 24/25 MS: 1 CrossOver- 00:07:40.997 [2024-11-27 15:07:06.263593] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:40.997 [2024-11-27 15:07:06.263625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:40.997 [2024-11-27 15:07:06.263696] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION 
REPORT (0e) sqid:1 cid:1 nsid:0 00:07:40.997 [2024-11-27 15:07:06.263712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:40.997 [2024-11-27 15:07:06.263767] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:40.997 [2024-11-27 15:07:06.263783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:40.997 [2024-11-27 15:07:06.263836] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:40.998 [2024-11-27 15:07:06.263851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:40.998 #48 NEW cov: 12519 ft: 15373 corp: 34/677b lim: 25 exec/s: 48 rss: 75Mb L: 22/25 MS: 1 ChangeByte- 00:07:40.998 [2024-11-27 15:07:06.303852] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:40.998 [2024-11-27 15:07:06.303879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:40.998 [2024-11-27 15:07:06.303932] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:40.998 [2024-11-27 15:07:06.303947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:40.998 [2024-11-27 15:07:06.304001] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:40.998 [2024-11-27 15:07:06.304016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:40.998 [2024-11-27 15:07:06.304068] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:40.998 [2024-11-27 15:07:06.304084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:40.998 [2024-11-27 15:07:06.304138] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:07:40.998 [2024-11-27 15:07:06.304153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:40.998 #49 NEW cov: 12519 ft: 15384 corp: 35/702b lim: 25 exec/s: 24 rss: 75Mb L: 25/25 MS: 1 CopyPart- 00:07:40.998 #49 DONE cov: 12519 ft: 15384 corp: 35/702b lim: 25 exec/s: 24 rss: 75Mb 00:07:40.998 ###### Recommended dictionary. ###### 00:07:40.998 "\007\000" # Uses: 1 00:07:40.998 ###### End of recommended dictionary. 
###### 00:07:40.998 Done 49 runs in 2 second(s) 00:07:41.257 15:07:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_23.conf /var/tmp/suppress_nvmf_fuzz 00:07:41.257 15:07:06 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:41.257 15:07:06 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:41.257 15:07:06 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 24 1 0x1 00:07:41.257 15:07:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=24 00:07:41.257 15:07:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:41.257 15:07:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:41.257 15:07:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:07:41.257 15:07:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_24.conf 00:07:41.257 15:07:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:41.257 15:07:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:41.257 15:07:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 24 00:07:41.257 15:07:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4424 00:07:41.257 15:07:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:07:41.257 15:07:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' 00:07:41.257 15:07:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4424"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:41.257 15:07:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:41.257 15:07:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:41.257 15:07:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' -c /tmp/fuzz_json_24.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 -Z 24 00:07:41.257 [2024-11-27 15:07:06.461365] Starting SPDK v25.01-pre git sha1 2e10c84c8 / DPDK 24.03.0 initialization... 
00:07:41.257 [2024-11-27 15:07:06.461420] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2375855 ] 00:07:41.517 [2024-11-27 15:07:06.651345] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.517 [2024-11-27 15:07:06.687197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.517 [2024-11-27 15:07:06.746815] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:41.517 [2024-11-27 15:07:06.763190] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4424 *** 00:07:41.517 INFO: Running with entropic power schedule (0xFF, 100). 00:07:41.517 INFO: Seed: 3938962387 00:07:41.517 INFO: Loaded 1 modules (389699 inline 8-bit counters): 389699 [0x2c6ea8c, 0x2ccdccf), 00:07:41.517 INFO: Loaded 1 PC tables (389699 PCs): 389699 [0x2ccdcd0,0x32c0100), 00:07:41.517 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:07:41.517 INFO: A corpus is not provided, starting from an empty corpus 00:07:41.517 #2 INITED exec/s: 0 rss: 66Mb 00:07:41.517 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:41.517 This may also happen if the target rejected all inputs we tried so far 00:07:41.517 [2024-11-27 15:07:06.839912] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069414846463 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.517 [2024-11-27 15:07:06.839956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.517 [2024-11-27 15:07:06.840068] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.517 [2024-11-27 15:07:06.840089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:41.517 [2024-11-27 15:07:06.840211] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.517 [2024-11-27 15:07:06.840234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:41.517 [2024-11-27 15:07:06.840366] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.517 [2024-11-27 15:07:06.840388] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:42.036 NEW_FUNC[1/717]: 0x467728 in fuzz_nvm_compare_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:685 00:07:42.036 NEW_FUNC[2/717]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:42.036 #51 NEW cov: 12342 ft: 12327 corp: 2/91b lim: 100 exec/s: 0 rss: 73Mb L: 90/90 MS: 4 CMP-EraseBytes-InsertRepeatedBytes-InsertRepeatedBytes- DE: "\000\003"- 00:07:42.036 [2024-11-27 15:07:07.170335] nvme_qpair.c: 247:nvme_io_qpair_print_command: 
*NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.036 [2024-11-27 15:07:07.170394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:42.036 [2024-11-27 15:07:07.170528] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.036 [2024-11-27 15:07:07.170557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:42.036 NEW_FUNC[1/1]: 0x105a918 in posix_sock_readv /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/module/sock/posix/posix.c:1577 00:07:42.036 #58 NEW cov: 12476 ft: 13474 corp: 3/134b lim: 100 exec/s: 0 rss: 73Mb L: 43/90 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:07:42.036 [2024-11-27 15:07:07.230394] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.036 [2024-11-27 15:07:07.230422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:42.036 [2024-11-27 15:07:07.230538] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.036 [2024-11-27 15:07:07.230562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:42.036 #59 NEW cov: 12482 ft: 13755 corp: 4/177b lim: 100 exec/s: 0 rss: 73Mb L: 43/90 MS: 1 ChangeBit- 00:07:42.036 [2024-11-27 15:07:07.300620] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069481693183 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.036 [2024-11-27 15:07:07.300654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:42.036 [2024-11-27 15:07:07.300771] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.036 [2024-11-27 15:07:07.300795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:42.036 #60 NEW cov: 12567 ft: 13952 corp: 5/234b lim: 100 exec/s: 0 rss: 73Mb L: 57/90 MS: 1 CrossOver- 00:07:42.036 [2024-11-27 15:07:07.350722] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.036 [2024-11-27 15:07:07.350756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:42.036 [2024-11-27 15:07:07.350888] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744072635809791 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.036 [2024-11-27 15:07:07.350912] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:42.294 #66 NEW cov: 12567 ft: 14065 corp: 6/277b lim: 100 exec/s: 0 rss: 73Mb L: 43/90 MS: 1 ChangeBit- 00:07:42.294 [2024-11-27 
15:07:07.420996] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551583 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.294 [2024-11-27 15:07:07.421030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:42.294 [2024-11-27 15:07:07.421164] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.294 [2024-11-27 15:07:07.421192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:42.294 #67 NEW cov: 12567 ft: 14161 corp: 7/320b lim: 100 exec/s: 0 rss: 73Mb L: 43/90 MS: 1 ChangeBit- 00:07:42.294 [2024-11-27 15:07:07.471057] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.294 [2024-11-27 15:07:07.471090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:42.294 [2024-11-27 15:07:07.471215] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.294 [2024-11-27 15:07:07.471235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:42.294 #73 NEW cov: 12567 ft: 14299 corp: 8/363b lim: 100 exec/s: 0 rss: 73Mb L: 43/90 MS: 1 ChangeBinInt- 00:07:42.294 [2024-11-27 15:07:07.521240] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.294 [2024-11-27 15:07:07.521275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:42.294 [2024-11-27 15:07:07.521384] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744072635809791 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.294 [2024-11-27 15:07:07.521405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:42.294 #74 NEW cov: 12567 ft: 14330 corp: 9/406b lim: 100 exec/s: 0 rss: 74Mb L: 43/90 MS: 1 ChangeByte- 00:07:42.294 [2024-11-27 15:07:07.591474] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446509877732835327 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.294 [2024-11-27 15:07:07.591509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:42.294 [2024-11-27 15:07:07.591630] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744072635809791 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.294 [2024-11-27 15:07:07.591655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:42.294 #80 NEW cov: 12567 ft: 14369 corp: 10/449b lim: 100 exec/s: 0 rss: 74Mb L: 43/90 MS: 1 ChangeByte- 00:07:42.553 [2024-11-27 15:07:07.641623] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 
lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.553 [2024-11-27 15:07:07.641654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:42.553 [2024-11-27 15:07:07.641779] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.553 [2024-11-27 15:07:07.641801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:42.553 #81 NEW cov: 12567 ft: 14408 corp: 11/492b lim: 100 exec/s: 0 rss: 74Mb L: 43/90 MS: 1 ShuffleBytes- 00:07:42.553 [2024-11-27 15:07:07.691755] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.553 [2024-11-27 15:07:07.691786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:42.553 [2024-11-27 15:07:07.691889] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.553 [2024-11-27 15:07:07.691911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:42.553 NEW_FUNC[1/1]: 0x1c47d08 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:42.553 #82 NEW cov: 12590 ft: 14460 corp: 12/535b lim: 100 exec/s: 0 rss: 74Mb L: 43/90 MS: 1 ChangeBit- 00:07:42.553 [2024-11-27 15:07:07.741862] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069481693183 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.553 [2024-11-27 15:07:07.741899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:42.553 [2024-11-27 15:07:07.742020] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.553 [2024-11-27 15:07:07.742042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:42.553 #83 NEW cov: 12590 ft: 14471 corp: 13/592b lim: 100 exec/s: 0 rss: 74Mb L: 57/90 MS: 1 ShuffleBytes- 00:07:42.553 [2024-11-27 15:07:07.812153] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.553 [2024-11-27 15:07:07.812186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:42.553 [2024-11-27 15:07:07.812314] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65281 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.553 [2024-11-27 15:07:07.812334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:42.553 #84 NEW cov: 12590 ft: 14512 corp: 14/635b lim: 100 exec/s: 84 rss: 74Mb L: 43/90 MS: 1 PersAutoDict- DE: "\000\003"- 00:07:42.553 [2024-11-27 15:07:07.862551] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.553 [2024-11-27 15:07:07.862587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:42.553 [2024-11-27 15:07:07.862707] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446462603027808255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.553 [2024-11-27 15:07:07.862732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:42.553 [2024-11-27 15:07:07.862853] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.553 [2024-11-27 15:07:07.862874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:42.553 #85 NEW cov: 12590 ft: 14815 corp: 15/704b lim: 100 exec/s: 85 rss: 74Mb L: 69/90 MS: 1 InsertRepeatedBytes- 00:07:42.812 [2024-11-27 15:07:07.912291] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069481693183 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.812 [2024-11-27 15:07:07.912324] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:42.812 [2024-11-27 15:07:07.912445] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.812 [2024-11-27 15:07:07.912469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:42.812 #91 NEW cov: 12590 ft: 14865 corp: 16/761b lim: 100 exec/s: 91 rss: 74Mb L: 57/90 MS: 1 ChangeByte- 00:07:42.812 [2024-11-27 15:07:07.962544] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.812 [2024-11-27 15:07:07.962579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:42.812 [2024-11-27 15:07:07.962715] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.812 [2024-11-27 15:07:07.962739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:42.812 #92 NEW cov: 12590 ft: 14881 corp: 17/804b lim: 100 exec/s: 92 rss: 74Mb L: 43/90 MS: 1 ShuffleBytes- 00:07:42.812 [2024-11-27 15:07:08.032831] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:12008468690265089702 len:42663 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.812 [2024-11-27 15:07:08.032867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:42.812 [2024-11-27 15:07:08.032987] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:12008468691120727718 len:42663 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.812 [2024-11-27 15:07:08.033011] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:42.812 #94 NEW cov: 12590 ft: 14914 corp: 18/844b lim: 100 exec/s: 94 rss: 74Mb L: 40/90 MS: 2 ChangeByte-InsertRepeatedBytes- 00:07:42.812 [2024-11-27 15:07:08.082884] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.812 [2024-11-27 15:07:08.082920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:42.812 [2024-11-27 15:07:08.083040] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.812 [2024-11-27 15:07:08.083061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:42.812 #95 NEW cov: 12590 ft: 14935 corp: 19/887b lim: 100 exec/s: 95 rss: 74Mb L: 43/90 MS: 1 ShuffleBytes- 00:07:43.071 [2024-11-27 15:07:08.153246] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.071 [2024-11-27 15:07:08.153281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:43.071 [2024-11-27 15:07:08.153400] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.071 [2024-11-27 15:07:08.153424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:43.071 #96 NEW cov: 12590 ft: 14952 corp: 20/934b lim: 100 exec/s: 96 rss: 74Mb L: 47/90 MS: 1 CopyPart- 00:07:43.071 [2024-11-27 15:07:08.223341] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.071 [2024-11-27 15:07:08.223376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:43.071 [2024-11-27 15:07:08.223505] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446548699942223871 len:20047 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.071 [2024-11-27 15:07:08.223529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:43.071 #97 NEW cov: 12590 ft: 14954 corp: 21/992b lim: 100 exec/s: 97 rss: 74Mb L: 58/90 MS: 1 InsertRepeatedBytes- 00:07:43.071 [2024-11-27 15:07:08.273991] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.071 [2024-11-27 15:07:08.274027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:43.071 [2024-11-27 15:07:08.274109] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446463603755188223 len:59882 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.071 [2024-11-27 15:07:08.274135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT 
(00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:43.071 [2024-11-27 15:07:08.274256] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:3909091328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.071 [2024-11-27 15:07:08.274280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:43.071 [2024-11-27 15:07:08.274401] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:281470681743360 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.071 [2024-11-27 15:07:08.274426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:43.071 #98 NEW cov: 12590 ft: 14961 corp: 22/1075b lim: 100 exec/s: 98 rss: 74Mb L: 83/90 MS: 1 InsertRepeatedBytes- 00:07:43.071 [2024-11-27 15:07:08.343691] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.071 [2024-11-27 15:07:08.343727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:43.071 [2024-11-27 15:07:08.343845] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.071 [2024-11-27 15:07:08.343865] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:43.071 #99 NEW cov: 12590 ft: 14972 corp: 23/1122b lim: 100 exec/s: 99 rss: 74Mb L: 47/90 MS: 1 ShuffleBytes- 00:07:43.330 [2024-11-27 15:07:08.413948] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446509877732835327 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.330 [2024-11-27 15:07:08.413979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:43.330 [2024-11-27 15:07:08.414042] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744072635809791 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.330 [2024-11-27 15:07:08.414060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:43.330 #100 NEW cov: 12590 ft: 15009 corp: 24/1165b lim: 100 exec/s: 100 rss: 74Mb L: 43/90 MS: 1 ChangeByte- 00:07:43.330 [2024-11-27 15:07:08.484122] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.330 [2024-11-27 15:07:08.484156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:43.330 [2024-11-27 15:07:08.484278] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.330 [2024-11-27 15:07:08.484302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:43.330 #101 NEW cov: 12590 ft: 15043 corp: 25/1208b lim: 100 exec/s: 101 rss: 74Mb L: 43/90 MS: 1 ShuffleBytes- 00:07:43.330 [2024-11-27 15:07:08.534196] 
nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.330 [2024-11-27 15:07:08.534232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:43.330 [2024-11-27 15:07:08.534347] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709546751 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.330 [2024-11-27 15:07:08.534376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:43.330 #102 NEW cov: 12590 ft: 15071 corp: 26/1256b lim: 100 exec/s: 102 rss: 74Mb L: 48/90 MS: 1 InsertByte- 00:07:43.330 [2024-11-27 15:07:08.604480] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446509877732835327 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.330 [2024-11-27 15:07:08.604514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:43.330 [2024-11-27 15:07:08.604641] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744072635809791 len:65290 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.330 [2024-11-27 15:07:08.604666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:43.330 #103 NEW cov: 12590 ft: 15082 corp: 27/1299b lim: 100 exec/s: 103 rss: 75Mb L: 43/90 MS: 1 ChangeBinInt- 00:07:43.330 [2024-11-27 15:07:08.654627] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069481693183 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.330 [2024-11-27 15:07:08.654658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:43.330 [2024-11-27 15:07:08.654776] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.330 [2024-11-27 15:07:08.654802] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:43.589 #104 NEW cov: 12590 ft: 15094 corp: 28/1356b lim: 100 exec/s: 104 rss: 75Mb L: 57/90 MS: 1 PersAutoDict- DE: "\000\003"- 00:07:43.589 [2024-11-27 15:07:08.704532] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069582618623 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.589 [2024-11-27 15:07:08.704560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:43.589 #105 NEW cov: 12590 ft: 15870 corp: 29/1391b lim: 100 exec/s: 105 rss: 75Mb L: 35/90 MS: 1 CrossOver- 00:07:43.589 [2024-11-27 15:07:08.754971] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.589 [2024-11-27 15:07:08.755004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:43.589 [2024-11-27 15:07:08.755117] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: 
COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.589 [2024-11-27 15:07:08.755155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:43.589 #106 NEW cov: 12590 ft: 15903 corp: 30/1434b lim: 100 exec/s: 106 rss: 75Mb L: 43/90 MS: 1 ShuffleBytes- 00:07:43.589 [2024-11-27 15:07:08.824913] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069582618623 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.590 [2024-11-27 15:07:08.824939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:43.590 #107 NEW cov: 12590 ft: 15904 corp: 31/1469b lim: 100 exec/s: 53 rss: 75Mb L: 35/90 MS: 1 ChangeBit- 00:07:43.590 #107 DONE cov: 12590 ft: 15904 corp: 31/1469b lim: 100 exec/s: 53 rss: 75Mb 00:07:43.590 ###### Recommended dictionary. ###### 00:07:43.590 "\000\003" # Uses: 5 00:07:43.590 ###### End of recommended dictionary. ###### 00:07:43.590 Done 107 runs in 2 second(s) 00:07:43.849 15:07:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_24.conf /var/tmp/suppress_nvmf_fuzz 00:07:43.849 15:07:08 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:43.849 15:07:08 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:43.849 15:07:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@79 -- # trap - SIGINT SIGTERM EXIT 00:07:43.849 00:07:43.849 real 1m3.260s 00:07:43.849 user 1m39.827s 00:07:43.849 sys 0m7.129s 00:07:43.849 15:07:08 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:43.849 15:07:08 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:07:43.849 ************************************ 00:07:43.849 END TEST nvmf_llvm_fuzz 00:07:43.849 ************************************ 00:07:43.849 15:07:09 llvm_fuzz -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:07:43.849 15:07:09 llvm_fuzz -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:07:43.849 15:07:09 llvm_fuzz -- fuzz/llvm.sh@20 -- # run_test vfio_llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh 00:07:43.849 15:07:09 llvm_fuzz -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:43.849 15:07:09 llvm_fuzz -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:43.849 15:07:09 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:07:43.849 ************************************ 00:07:43.849 START TEST vfio_llvm_fuzz 00:07:43.849 ************************************ 00:07:43.849 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh 00:07:43.849 * Looking for test storage... 
00:07:43.849 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:43.849 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:43.849 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:07:43.849 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:44.109 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:44.109 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:44.109 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:44.109 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:44.109 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:07:44.109 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:07:44.109 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:07:44.109 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:07:44.109 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:07:44.109 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:07:44.109 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:07:44.109 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:44.109 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:07:44.109 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:07:44.109 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:44.109 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:44.109 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:07:44.109 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:07:44.109 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:44.109 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:07:44.109 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:07:44.109 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:07:44.109 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:07:44.109 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:44.109 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:07:44.109 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:07:44.109 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:44.109 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:44.109 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:07:44.109 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:44.109 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:44.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.109 --rc genhtml_branch_coverage=1 00:07:44.109 --rc genhtml_function_coverage=1 00:07:44.109 --rc genhtml_legend=1 00:07:44.109 --rc geninfo_all_blocks=1 00:07:44.109 --rc geninfo_unexecuted_blocks=1 00:07:44.109 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:44.109 ' 00:07:44.109 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:44.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.109 --rc genhtml_branch_coverage=1 00:07:44.109 --rc genhtml_function_coverage=1 00:07:44.109 --rc genhtml_legend=1 00:07:44.109 --rc geninfo_all_blocks=1 00:07:44.109 --rc geninfo_unexecuted_blocks=1 00:07:44.109 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:44.109 ' 00:07:44.109 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:44.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.109 --rc genhtml_branch_coverage=1 00:07:44.109 --rc genhtml_function_coverage=1 00:07:44.109 --rc genhtml_legend=1 00:07:44.109 --rc geninfo_all_blocks=1 00:07:44.109 --rc geninfo_unexecuted_blocks=1 00:07:44.109 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:44.109 ' 00:07:44.109 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:44.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.109 --rc genhtml_branch_coverage=1 00:07:44.109 --rc genhtml_function_coverage=1 00:07:44.109 --rc genhtml_legend=1 00:07:44.109 --rc geninfo_all_blocks=1 00:07:44.109 --rc geninfo_unexecuted_blocks=1 00:07:44.109 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:44.109 ' 00:07:44.109 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@64 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:07:44.109 15:07:09 llvm_fuzz.vfio_llvm_fuzz 
-- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:07:44.109 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:44.109 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@34 -- # set -e 00:07:44.109 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:44.109 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:44.109 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:07:44.109 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:07:44.109 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:44.109 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:07:44.109 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:44.109 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:44.109 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:44.109 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:44.109 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:44.109 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:44.109 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:44.109 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:44.109 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:44.109 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:44.109 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:44.109 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:44.109 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:44.109 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:44.109 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:44.109 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:44.109 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:07:44.109 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:07:44.109 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:44.109 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@23 -- # CONFIG_CET=n 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@24 -- # 
CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@37 -- # CONFIG_FUZZER=y 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@58 -- # 
CONFIG_ARCH=native 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@72 -- # CONFIG_SHARED=n 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@76 -- # CONFIG_FC=n 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@90 -- # CONFIG_URING=n 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # dirname 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:44.110 #define SPDK_CONFIG_H 00:07:44.110 #define SPDK_CONFIG_AIO_FSDEV 1 00:07:44.110 #define SPDK_CONFIG_APPS 1 00:07:44.110 #define SPDK_CONFIG_ARCH native 00:07:44.110 #undef SPDK_CONFIG_ASAN 00:07:44.110 #undef SPDK_CONFIG_AVAHI 00:07:44.110 #undef SPDK_CONFIG_CET 00:07:44.110 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:07:44.110 #define SPDK_CONFIG_COVERAGE 1 00:07:44.110 #define SPDK_CONFIG_CROSS_PREFIX 00:07:44.110 #undef SPDK_CONFIG_CRYPTO 00:07:44.110 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:44.110 #undef SPDK_CONFIG_CUSTOMOCF 00:07:44.110 #undef SPDK_CONFIG_DAOS 00:07:44.110 #define SPDK_CONFIG_DAOS_DIR 00:07:44.110 #define SPDK_CONFIG_DEBUG 1 00:07:44.110 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:44.110 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:07:44.110 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:44.110 #define SPDK_CONFIG_DPDK_LIB_DIR 00:07:44.110 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:44.110 #undef SPDK_CONFIG_DPDK_UADK 00:07:44.110 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:07:44.110 #define SPDK_CONFIG_EXAMPLES 1 00:07:44.110 #undef SPDK_CONFIG_FC 00:07:44.110 #define SPDK_CONFIG_FC_PATH 00:07:44.110 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:44.110 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:44.110 #define SPDK_CONFIG_FSDEV 1 00:07:44.110 #undef SPDK_CONFIG_FUSE 00:07:44.110 #define SPDK_CONFIG_FUZZER 1 00:07:44.110 #define SPDK_CONFIG_FUZZER_LIB /usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:07:44.110 #undef 
SPDK_CONFIG_GOLANG 00:07:44.110 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:44.110 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:44.110 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:44.110 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:07:44.110 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:44.110 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:44.110 #undef SPDK_CONFIG_HAVE_LZ4 00:07:44.110 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:07:44.110 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:07:44.110 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:44.110 #define SPDK_CONFIG_IDXD 1 00:07:44.110 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:44.110 #undef SPDK_CONFIG_IPSEC_MB 00:07:44.110 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:44.110 #define SPDK_CONFIG_ISAL 1 00:07:44.110 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:44.110 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:44.110 #define SPDK_CONFIG_LIBDIR 00:07:44.110 #undef SPDK_CONFIG_LTO 00:07:44.110 #define SPDK_CONFIG_MAX_LCORES 128 00:07:44.110 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:07:44.110 #define SPDK_CONFIG_NVME_CUSE 1 00:07:44.110 #undef SPDK_CONFIG_OCF 00:07:44.110 #define SPDK_CONFIG_OCF_PATH 00:07:44.110 #define SPDK_CONFIG_OPENSSL_PATH 00:07:44.110 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:44.110 #define SPDK_CONFIG_PGO_DIR 00:07:44.110 #undef SPDK_CONFIG_PGO_USE 00:07:44.110 #define SPDK_CONFIG_PREFIX /usr/local 00:07:44.110 #undef SPDK_CONFIG_RAID5F 00:07:44.110 #undef SPDK_CONFIG_RBD 00:07:44.110 #define SPDK_CONFIG_RDMA 1 00:07:44.110 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:44.110 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:44.110 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:44.110 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:44.110 #undef SPDK_CONFIG_SHARED 00:07:44.110 #undef SPDK_CONFIG_SMA 00:07:44.110 #define SPDK_CONFIG_TESTS 1 00:07:44.110 #undef SPDK_CONFIG_TSAN 00:07:44.110 #define SPDK_CONFIG_UBLK 1 00:07:44.110 #define SPDK_CONFIG_UBSAN 1 00:07:44.110 #undef SPDK_CONFIG_UNIT_TESTS 00:07:44.110 #undef SPDK_CONFIG_URING 00:07:44.110 #define SPDK_CONFIG_URING_PATH 00:07:44.110 #undef SPDK_CONFIG_URING_ZNS 00:07:44.110 #undef SPDK_CONFIG_USDT 00:07:44.110 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:44.110 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:44.110 #define SPDK_CONFIG_VFIO_USER 1 00:07:44.110 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:44.110 #define SPDK_CONFIG_VHOST 1 00:07:44.110 #define SPDK_CONFIG_VIRTIO 1 00:07:44.110 #undef SPDK_CONFIG_VTUNE 00:07:44.110 #define SPDK_CONFIG_VTUNE_DIR 00:07:44.110 #define SPDK_CONFIG_WERROR 1 00:07:44.110 #define SPDK_CONFIG_WPDK_DIR 00:07:44.110 #undef SPDK_CONFIG_XNVME 00:07:44.110 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@5 -- # export PATH 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- pm/common@64 -- # TEST_TAG=N/A 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- pm/common@68 -- # uname -s 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- 
pm/common@68 -- # PM_OS=Linux 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- pm/common@76 -- # SUDO[0]= 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:44.110 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power ]] 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@58 -- # : 0 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@62 -- # : 0 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@64 -- # : 0 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@66 -- # : 1 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@68 -- # : 0 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@70 -- # : 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@72 -- # : 0 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@74 -- # : 0 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@76 -- # : 0 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- 
common/autotest_common.sh@78 -- # : 0 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@80 -- # : 0 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@82 -- # : 0 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@84 -- # : 0 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@86 -- # : 0 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@88 -- # : 0 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@90 -- # : 0 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@92 -- # : 0 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@94 -- # : 0 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@96 -- # : 0 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@98 -- # : 1 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@100 -- # : 1 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@102 -- # : rdma 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@104 -- # : 0 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@106 -- # : 0 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@108 -- # : 0 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@110 -- # : 0 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@112 -- # : 0 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:07:44.111 15:07:09 
llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@114 -- # : 0 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@116 -- # : 0 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@118 -- # : 0 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@120 -- # : 0 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@122 -- # : 0 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@124 -- # : 1 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@126 -- # : 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@128 -- # : 0 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@130 -- # : 0 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@132 -- # : 0 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@134 -- # : 0 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@136 -- # : 0 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@138 -- # : 0 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@140 -- # : 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@142 -- # : true 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@144 -- # : 0 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@146 -- # : 0 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@148 -- # : 0 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 
00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@150 -- # : 0 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@152 -- # : 0 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@154 -- # : 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@156 -- # : 0 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@158 -- # : 0 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@160 -- # : 0 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@162 -- # : 0 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@164 -- # : 0 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@166 -- # : 0 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@169 -- # : 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@171 -- # : 0 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@173 -- # : 0 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@175 -- # : 1 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@177 -- # : 0 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@183 -- # export 
VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:07:44.111 15:07:09 
llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@206 -- # cat 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@262 -- # export 
AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@269 -- # _LCOV= 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@270 -- # [[ 1 -eq 1 ]] 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@270 -- # _LCOV=1 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@275 -- # lcov_opt='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@279 -- # export valgrind= 00:07:44.111 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@279 -- # valgrind= 00:07:44.112 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@285 -- # uname -s 00:07:44.112 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:07:44.112 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:07:44.112 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:07:44.112 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:07:44.112 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@289 -- # MAKE=make 00:07:44.112 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j112 00:07:44.112 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:07:44.112 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:07:44.112 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:07:44.112 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@309 -- # TEST_MODE= 00:07:44.112 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@331 -- # [[ -z 2376322 ]] 00:07:44.112 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@331 -- # kill -0 2376322 00:07:44.112 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:07:44.112 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:07:44.112 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:07:44.112 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- 
common/autotest_common.sh@344 -- # local mount target_dir 00:07:44.112 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:07:44.112 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:07:44.112 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:07:44.112 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:07:44.112 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.HDguri 00:07:44.112 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:44.112 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:07:44.112 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:07:44.112 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio /tmp/spdk.HDguri/tests/vfio /tmp/spdk.HDguri 00:07:44.112 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:07:44.112 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:07:44.112 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:07:44.112 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@340 -- # df -T 00:07:44.112 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:07:44.112 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:07:44.112 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:07:44.112 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:07:44.112 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:07:44.112 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:07:44.112 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:07:44.112 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:07:44.112 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:07:44.112 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:07:44.112 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:07:44.112 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:07:44.112 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:07:44.112 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:07:44.112 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=52919160832 00:07:44.112 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=61730607104 00:07:44.112 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@376 -- # 
uses["$mount"]=8811446272 00:07:44.112 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:07:44.112 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:07:44.112 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:07:44.112 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=30860537856 00:07:44.112 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=30865301504 00:07:44.112 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=4763648 00:07:44.112 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:07:44.112 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:07:44.112 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:07:44.112 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=12340129792 00:07:44.112 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=12346122240 00:07:44.112 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=5992448 00:07:44.112 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:07:44.112 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:07:44.112 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:07:44.112 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=30863646720 00:07:44.112 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=30865305600 00:07:44.112 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=1658880 00:07:44.112 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:07:44.112 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:07:44.112 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:07:44.112 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=6173044736 00:07:44.112 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=6173057024 00:07:44.112 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:07:44.112 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:07:44.112 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:07:44.112 * Looking for test storage... 
00:07:44.112 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@381 -- # local target_space new_size 00:07:44.112 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:07:44.112 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:44.112 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:44.112 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@385 -- # mount=/ 00:07:44.112 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@387 -- # target_space=52919160832 00:07:44.112 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:07:44.112 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:07:44.112 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:07:44.112 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:07:44.112 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:07:44.112 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@394 -- # new_size=11026038784 00:07:44.112 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:44.112 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:44.112 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:44.112 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:44.112 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:44.112 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@402 -- # return 0 00:07:44.112 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1680 -- # set -o errtrace 00:07:44.112 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:07:44.112 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:44.112 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:44.112 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1685 -- # true 00:07:44.112 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1687 -- # xtrace_fd 00:07:44.112 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:44.112 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:44.112 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@27 -- # exec 00:07:44.112 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@29 -- # exec 00:07:44.112 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:44.112 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@16 -- # unset -v 
'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:07:44.112 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:44.112 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@18 -- # set -x 00:07:44.112 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:44.112 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:07:44.112 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:44.371 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:44.371 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:44.371 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:44.371 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:44.371 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:07:44.371 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:07:44.371 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:07:44.371 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:07:44.371 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:07:44.371 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:07:44.371 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:07:44.371 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:44.371 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:07:44.371 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:07:44.371 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:44.371 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:44.371 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:07:44.371 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:07:44.371 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:44.371 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:07:44.371 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:07:44.371 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:07:44.371 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:07:44.371 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:44.371 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:07:44.371 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:07:44.371 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:44.371 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:44.371 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:07:44.371 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:44.371 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:44.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.371 --rc genhtml_branch_coverage=1 00:07:44.371 --rc genhtml_function_coverage=1 00:07:44.371 --rc genhtml_legend=1 00:07:44.371 --rc geninfo_all_blocks=1 00:07:44.371 --rc geninfo_unexecuted_blocks=1 00:07:44.371 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:44.371 ' 00:07:44.371 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:44.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.371 --rc genhtml_branch_coverage=1 00:07:44.371 --rc genhtml_function_coverage=1 00:07:44.371 --rc genhtml_legend=1 00:07:44.371 --rc geninfo_all_blocks=1 00:07:44.371 --rc geninfo_unexecuted_blocks=1 00:07:44.371 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:44.371 ' 00:07:44.371 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:44.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.371 --rc genhtml_branch_coverage=1 00:07:44.371 --rc genhtml_function_coverage=1 00:07:44.371 --rc genhtml_legend=1 00:07:44.371 --rc geninfo_all_blocks=1 00:07:44.371 --rc geninfo_unexecuted_blocks=1 00:07:44.371 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:44.371 ' 00:07:44.371 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:44.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.372 --rc genhtml_branch_coverage=1 00:07:44.372 --rc genhtml_function_coverage=1 00:07:44.372 --rc genhtml_legend=1 00:07:44.372 --rc geninfo_all_blocks=1 00:07:44.372 --rc geninfo_unexecuted_blocks=1 00:07:44.372 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:44.372 ' 00:07:44.372 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@65 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/../common.sh 00:07:44.372 15:07:09 
llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@8 -- # pids=() 00:07:44.372 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@67 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:07:44.372 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@68 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:07:44.372 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@68 -- # fuzz_num=7 00:07:44.372 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@69 -- # (( fuzz_num != 0 )) 00:07:44.372 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@71 -- # trap 'cleanup /tmp/vfio-user-* /var/tmp/suppress_vfio_fuzz; exit 1' SIGINT SIGTERM EXIT 00:07:44.372 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@74 -- # mem_size=0 00:07:44.372 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@75 -- # [[ 1 -eq 1 ]] 00:07:44.372 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@76 -- # start_llvm_fuzz_short 7 1 00:07:44.372 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@69 -- # local fuzz_num=7 00:07:44.372 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@70 -- # local time=1 00:07:44.372 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i = 0 )) 00:07:44.372 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:44.372 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:07:44.372 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=0 00:07:44.372 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:44.372 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:44.372 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:07:44.372 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-0 00:07:44.372 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-0/domain/1 00:07:44.372 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-0/domain/2 00:07:44.372 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-0/fuzz_vfio_json.conf 00:07:44.372 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:44.372 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:44.372 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-0 /tmp/vfio-user-0/domain/1 /tmp/vfio-user-0/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:07:44.372 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-0/domain/1%; 00:07:44.372 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-0/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:44.372 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:44.372 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:44.372 15:07:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-0/domain/1 -c /tmp/vfio-user-0/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 -Y /tmp/vfio-user-0/domain/2 -r /tmp/vfio-user-0/spdk0.sock -Z 0 00:07:44.372 [2024-11-27 15:07:09.548683] Starting SPDK v25.01-pre git sha1 2e10c84c8 / DPDK 24.03.0 initialization... 00:07:44.372 [2024-11-27 15:07:09.548753] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2376466 ] 00:07:44.372 [2024-11-27 15:07:09.629837] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.372 [2024-11-27 15:07:09.672880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.631 INFO: Running with entropic power schedule (0xFF, 100). 00:07:44.631 INFO: Seed: 2721962964 00:07:44.631 INFO: Loaded 1 modules (386935 inline 8-bit counters): 386935 [0x2c2f28c, 0x2c8da03), 00:07:44.631 INFO: Loaded 1 PC tables (386935 PCs): 386935 [0x2c8da08,0x3275178), 00:07:44.631 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:07:44.631 INFO: A corpus is not provided, starting from an empty corpus 00:07:44.631 #2 INITED exec/s: 0 rss: 68Mb 00:07:44.631 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:44.631 This may also happen if the target rejected all inputs we tried so far 00:07:44.631 [2024-11-27 15:07:09.909462] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /tmp/vfio-user-0/domain/2: enabling controller 00:07:45.149 NEW_FUNC[1/675]: 0x43b5e8 in fuzz_vfio_user_region_rw /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:84 00:07:45.150 NEW_FUNC[2/675]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:45.150 #17 NEW cov: 11209 ft: 11158 corp: 2/7b lim: 6 exec/s: 0 rss: 74Mb L: 6/6 MS: 5 InsertRepeatedBytes-CopyPart-ChangeBinInt-InsertByte-CopyPart- 00:07:45.408 NEW_FUNC[1/1]: 0x1927108 in _nvme_qpair_complete_abort_queued_reqs /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_qpair.c:593 00:07:45.408 #18 NEW cov: 11236 ft: 14640 corp: 3/13b lim: 6 exec/s: 0 rss: 75Mb L: 6/6 MS: 1 ChangeBit- 00:07:45.667 NEW_FUNC[1/1]: 0x1c14158 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:45.667 #29 NEW cov: 11253 ft: 15455 corp: 4/19b lim: 6 exec/s: 0 rss: 76Mb L: 6/6 MS: 1 CMP- DE: "\366\377\377\377"- 00:07:45.667 #40 NEW cov: 11256 ft: 15758 corp: 5/25b lim: 6 exec/s: 40 rss: 76Mb L: 6/6 MS: 1 ChangeByte- 00:07:45.926 #41 NEW cov: 11256 ft: 16103 corp: 6/31b lim: 6 exec/s: 41 rss: 76Mb L: 6/6 MS: 1 CrossOver- 00:07:46.185 #42 NEW cov: 11256 ft: 16827 corp: 7/37b lim: 6 exec/s: 42 rss: 76Mb L: 6/6 MS: 1 CopyPart- 00:07:46.185 #48 NEW cov: 11256 ft: 17219 corp: 8/43b lim: 6 exec/s: 48 rss: 76Mb L: 6/6 MS: 1 ChangeBit- 00:07:46.444 #49 NEW cov: 11263 ft: 17267 corp: 9/49b lim: 6 exec/s: 49 rss: 76Mb L: 6/6 MS: 1 CopyPart- 00:07:46.703 #50 NEW cov: 11263 ft: 17990 corp: 10/55b lim: 6 exec/s: 25 rss: 77Mb L: 6/6 MS: 1 ChangeBinInt- 00:07:46.703 #50 DONE cov: 11263 ft: 17990 corp: 10/55b lim: 6 exec/s: 25 rss: 77Mb 00:07:46.703 ###### Recommended dictionary. 
###### 00:07:46.703 "\366\377\377\377" # Uses: 1 00:07:46.703 ###### End of recommended dictionary. ###### 00:07:46.703 Done 50 runs in 2 second(s) 00:07:46.703 [2024-11-27 15:07:11.905788] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /tmp/vfio-user-0/domain/2: disabling controller 00:07:46.962 15:07:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-0 /var/tmp/suppress_vfio_fuzz 00:07:46.962 15:07:12 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:46.962 15:07:12 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:46.962 15:07:12 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:07:46.962 15:07:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=1 00:07:46.962 15:07:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:46.962 15:07:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:46.962 15:07:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:07:46.962 15:07:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-1 00:07:46.962 15:07:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-1/domain/1 00:07:46.962 15:07:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-1/domain/2 00:07:46.962 15:07:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-1/fuzz_vfio_json.conf 00:07:46.962 15:07:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:46.962 15:07:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:46.962 15:07:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-1 /tmp/vfio-user-1/domain/1 /tmp/vfio-user-1/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:07:46.962 15:07:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-1/domain/1%; 00:07:46.962 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-1/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:46.962 15:07:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:46.962 15:07:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:46.962 15:07:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-1/domain/1 -c /tmp/vfio-user-1/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 -Y /tmp/vfio-user-1/domain/2 -r /tmp/vfio-user-1/spdk1.sock -Z 1 00:07:46.962 [2024-11-27 15:07:12.174073] Starting SPDK v25.01-pre git sha1 2e10c84c8 / DPDK 24.03.0 initialization... 
00:07:46.962 [2024-11-27 15:07:12.174145] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2376922 ] 00:07:46.962 [2024-11-27 15:07:12.254681] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.962 [2024-11-27 15:07:12.293681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.221 INFO: Running with entropic power schedule (0xFF, 100). 00:07:47.221 INFO: Seed: 1053014356 00:07:47.221 INFO: Loaded 1 modules (386935 inline 8-bit counters): 386935 [0x2c2f28c, 0x2c8da03), 00:07:47.221 INFO: Loaded 1 PC tables (386935 PCs): 386935 [0x2c8da08,0x3275178), 00:07:47.221 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:07:47.221 INFO: A corpus is not provided, starting from an empty corpus 00:07:47.221 #2 INITED exec/s: 0 rss: 67Mb 00:07:47.221 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:47.221 This may also happen if the target rejected all inputs we tried so far 00:07:47.221 [2024-11-27 15:07:12.537495] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /tmp/vfio-user-1/domain/2: enabling controller 00:07:47.480 [2024-11-27 15:07:12.590641] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:47.480 [2024-11-27 15:07:12.590670] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:47.480 [2024-11-27 15:07:12.590688] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:47.740 NEW_FUNC[1/678]: 0x43bb88 in fuzz_vfio_user_version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:71 00:07:47.740 NEW_FUNC[2/678]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:47.740 #7 NEW cov: 11218 ft: 11186 corp: 2/5b lim: 4 exec/s: 0 rss: 73Mb L: 4/4 MS: 5 CrossOver-CrossOver-ChangeByte-InsertByte-InsertByte- 00:07:47.740 [2024-11-27 15:07:13.043408] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:47.740 [2024-11-27 15:07:13.043459] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:47.740 [2024-11-27 15:07:13.043478] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:47.999 #13 NEW cov: 11235 ft: 14575 corp: 3/9b lim: 4 exec/s: 0 rss: 74Mb L: 4/4 MS: 1 ShuffleBytes- 00:07:47.999 [2024-11-27 15:07:13.212783] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:47.999 [2024-11-27 15:07:13.212806] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:47.999 [2024-11-27 15:07:13.212822] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:47.999 NEW_FUNC[1/1]: 0x1c14158 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:47.999 #14 NEW cov: 11252 ft: 15168 corp: 4/13b lim: 4 exec/s: 0 rss: 75Mb L: 4/4 MS: 1 CopyPart- 00:07:48.258 [2024-11-27 15:07:13.380582] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:48.258 [2024-11-27 15:07:13.380613] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid 
argument 00:07:48.258 [2024-11-27 15:07:13.380631] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:48.258 #15 NEW cov: 11252 ft: 15782 corp: 5/17b lim: 4 exec/s: 0 rss: 75Mb L: 4/4 MS: 1 CrossOver- 00:07:48.258 [2024-11-27 15:07:13.550997] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:48.258 [2024-11-27 15:07:13.551019] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:48.258 [2024-11-27 15:07:13.551037] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:48.516 #17 NEW cov: 11252 ft: 17107 corp: 6/21b lim: 4 exec/s: 17 rss: 75Mb L: 4/4 MS: 2 ChangeBit-CrossOver- 00:07:48.516 [2024-11-27 15:07:13.731780] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:48.516 [2024-11-27 15:07:13.731802] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:48.516 [2024-11-27 15:07:13.731817] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:48.516 #19 NEW cov: 11252 ft: 17474 corp: 7/25b lim: 4 exec/s: 19 rss: 75Mb L: 4/4 MS: 2 EraseBytes-CrossOver- 00:07:48.775 [2024-11-27 15:07:13.901657] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:48.775 [2024-11-27 15:07:13.901681] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:48.775 [2024-11-27 15:07:13.901699] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:48.775 #20 NEW cov: 11252 ft: 17667 corp: 8/29b lim: 4 exec/s: 20 rss: 75Mb L: 4/4 MS: 1 ChangeByte- 00:07:48.775 [2024-11-27 15:07:14.062255] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:48.775 [2024-11-27 15:07:14.062278] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:48.775 [2024-11-27 15:07:14.062295] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:49.035 #21 NEW cov: 11252 ft: 17825 corp: 9/33b lim: 4 exec/s: 21 rss: 75Mb L: 4/4 MS: 1 ChangeBit- 00:07:49.035 [2024-11-27 15:07:14.222512] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:49.035 [2024-11-27 15:07:14.222535] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:49.035 [2024-11-27 15:07:14.222552] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:49.035 #22 NEW cov: 11259 ft: 17877 corp: 10/37b lim: 4 exec/s: 22 rss: 75Mb L: 4/4 MS: 1 ChangeBit- 00:07:49.294 [2024-11-27 15:07:14.389179] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:49.294 [2024-11-27 15:07:14.389203] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:49.294 [2024-11-27 15:07:14.389222] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:49.294 #30 NEW cov: 11259 ft: 18033 corp: 11/41b lim: 4 exec/s: 30 rss: 75Mb L: 4/4 MS: 3 EraseBytes-ShuffleBytes-InsertByte- 00:07:49.294 [2024-11-27 15:07:14.559981] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:49.294 [2024-11-27 15:07:14.560003] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:49.294 [2024-11-27 15:07:14.560018] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 
return failure 00:07:49.553 #31 NEW cov: 11259 ft: 18170 corp: 12/45b lim: 4 exec/s: 15 rss: 75Mb L: 4/4 MS: 1 CopyPart- 00:07:49.553 #31 DONE cov: 11259 ft: 18170 corp: 12/45b lim: 4 exec/s: 15 rss: 75Mb 00:07:49.553 Done 31 runs in 2 second(s) 00:07:49.553 [2024-11-27 15:07:14.682804] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /tmp/vfio-user-1/domain/2: disabling controller 00:07:49.812 15:07:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-1 /var/tmp/suppress_vfio_fuzz 00:07:49.812 15:07:14 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:49.812 15:07:14 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:49.812 15:07:14 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:07:49.812 15:07:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=2 00:07:49.812 15:07:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:49.812 15:07:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:49.813 15:07:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:07:49.813 15:07:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-2 00:07:49.813 15:07:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-2/domain/1 00:07:49.813 15:07:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-2/domain/2 00:07:49.813 15:07:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-2/fuzz_vfio_json.conf 00:07:49.813 15:07:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:49.813 15:07:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:49.813 15:07:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-2 /tmp/vfio-user-2/domain/1 /tmp/vfio-user-2/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:07:49.813 15:07:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-2/domain/1%; 00:07:49.813 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-2/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:49.813 15:07:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:49.813 15:07:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:49.813 15:07:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-2/domain/1 -c /tmp/vfio-user-2/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 -Y /tmp/vfio-user-2/domain/2 -r /tmp/vfio-user-2/spdk2.sock -Z 2 00:07:49.813 [2024-11-27 15:07:14.946913] Starting SPDK v25.01-pre git sha1 2e10c84c8 / DPDK 24.03.0 initialization... 
00:07:49.813 [2024-11-27 15:07:14.946985] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2377460 ] 00:07:49.813 [2024-11-27 15:07:15.027231] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.813 [2024-11-27 15:07:15.066711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.071 INFO: Running with entropic power schedule (0xFF, 100). 00:07:50.071 INFO: Seed: 3825981834 00:07:50.071 INFO: Loaded 1 modules (386935 inline 8-bit counters): 386935 [0x2c2f28c, 0x2c8da03), 00:07:50.071 INFO: Loaded 1 PC tables (386935 PCs): 386935 [0x2c8da08,0x3275178), 00:07:50.071 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:07:50.071 INFO: A corpus is not provided, starting from an empty corpus 00:07:50.071 #2 INITED exec/s: 0 rss: 67Mb 00:07:50.072 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:50.072 This may also happen if the target rejected all inputs we tried so far 00:07:50.072 [2024-11-27 15:07:15.311449] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /tmp/vfio-user-2/domain/2: enabling controller 00:07:50.072 [2024-11-27 15:07:15.353842] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:50.588 NEW_FUNC[1/676]: 0x43c578 in fuzz_vfio_user_get_region_info /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:103 00:07:50.588 NEW_FUNC[2/676]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:50.588 #6 NEW cov: 11187 ft: 11149 corp: 2/9b lim: 8 exec/s: 0 rss: 73Mb L: 8/8 MS: 4 ChangeByte-ShuffleBytes-ChangeBit-InsertRepeatedBytes- 00:07:50.588 [2024-11-27 15:07:15.817333] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:50.847 NEW_FUNC[1/1]: 0x1fb80b8 in msg_queue_run_batch /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:833 00:07:50.847 #17 NEW cov: 11214 ft: 14530 corp: 3/17b lim: 8 exec/s: 0 rss: 75Mb L: 8/8 MS: 1 CopyPart- 00:07:50.847 [2024-11-27 15:07:16.001887] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:50.847 NEW_FUNC[1/1]: 0x1c14158 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:50.847 #25 NEW cov: 11231 ft: 15136 corp: 4/25b lim: 8 exec/s: 0 rss: 76Mb L: 8/8 MS: 3 CrossOver-InsertRepeatedBytes-InsertByte- 00:07:50.847 [2024-11-27 15:07:16.174961] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:51.107 #26 NEW cov: 11231 ft: 15872 corp: 5/33b lim: 8 exec/s: 26 rss: 76Mb L: 8/8 MS: 1 ShuffleBytes- 00:07:51.107 [2024-11-27 15:07:16.346950] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:51.369 #30 NEW cov: 11231 ft: 16052 corp: 6/41b lim: 8 exec/s: 30 rss: 76Mb L: 8/8 MS: 4 ShuffleBytes-ChangeByte-InsertByte-InsertRepeatedBytes- 00:07:51.369 [2024-11-27 15:07:16.526703] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:51.369 #36 NEW cov: 11231 ft: 16396 corp: 7/49b lim: 8 exec/s: 36 rss: 76Mb L: 8/8 MS: 1 ChangeByte- 00:07:51.369 [2024-11-27 15:07:16.695701] vfio_user.c: 
170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:51.627 #37 NEW cov: 11231 ft: 16906 corp: 8/57b lim: 8 exec/s: 37 rss: 76Mb L: 8/8 MS: 1 ShuffleBytes- 00:07:51.627 [2024-11-27 15:07:16.866073] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:51.887 #41 NEW cov: 11231 ft: 17176 corp: 9/65b lim: 8 exec/s: 41 rss: 76Mb L: 8/8 MS: 4 EraseBytes-InsertByte-ChangeBit-CopyPart- 00:07:51.887 [2024-11-27 15:07:17.035482] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:51.887 #42 NEW cov: 11238 ft: 17481 corp: 10/73b lim: 8 exec/s: 42 rss: 76Mb L: 8/8 MS: 1 InsertRepeatedBytes- 00:07:51.887 [2024-11-27 15:07:17.207988] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:52.146 #43 NEW cov: 11238 ft: 17732 corp: 11/81b lim: 8 exec/s: 21 rss: 76Mb L: 8/8 MS: 1 ShuffleBytes- 00:07:52.146 #43 DONE cov: 11238 ft: 17732 corp: 11/81b lim: 8 exec/s: 21 rss: 76Mb 00:07:52.146 Done 43 runs in 2 second(s) 00:07:52.146 [2024-11-27 15:07:17.332791] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /tmp/vfio-user-2/domain/2: disabling controller 00:07:52.406 15:07:17 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-2 /var/tmp/suppress_vfio_fuzz 00:07:52.406 15:07:17 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:52.406 15:07:17 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:52.406 15:07:17 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:07:52.406 15:07:17 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=3 00:07:52.406 15:07:17 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:52.406 15:07:17 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:52.406 15:07:17 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:07:52.406 15:07:17 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-3 00:07:52.406 15:07:17 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-3/domain/1 00:07:52.406 15:07:17 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-3/domain/2 00:07:52.406 15:07:17 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-3/fuzz_vfio_json.conf 00:07:52.406 15:07:17 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:52.406 15:07:17 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:52.406 15:07:17 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-3 /tmp/vfio-user-3/domain/1 /tmp/vfio-user-3/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:07:52.406 15:07:17 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-3/domain/1%; 00:07:52.406 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-3/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:52.406 15:07:17 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:52.406 15:07:17 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:52.406 15:07:17 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-3/domain/1 -c /tmp/vfio-user-3/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 -Y /tmp/vfio-user-3/domain/2 -r /tmp/vfio-user-3/spdk3.sock -Z 3 00:07:52.406 [2024-11-27 15:07:17.596929] Starting SPDK v25.01-pre git sha1 2e10c84c8 / DPDK 24.03.0 initialization... 00:07:52.406 [2024-11-27 15:07:17.597002] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2377978 ] 00:07:52.406 [2024-11-27 15:07:17.679315] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.406 [2024-11-27 15:07:17.719503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.665 INFO: Running with entropic power schedule (0xFF, 100). 00:07:52.665 INFO: Seed: 2179020027 00:07:52.666 INFO: Loaded 1 modules (386935 inline 8-bit counters): 386935 [0x2c2f28c, 0x2c8da03), 00:07:52.666 INFO: Loaded 1 PC tables (386935 PCs): 386935 [0x2c8da08,0x3275178), 00:07:52.666 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:07:52.666 INFO: A corpus is not provided, starting from an empty corpus 00:07:52.666 #2 INITED exec/s: 0 rss: 67Mb 00:07:52.666 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:52.666 This may also happen if the target rejected all inputs we tried so far 00:07:52.666 [2024-11-27 15:07:17.958364] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /tmp/vfio-user-3/domain/2: enabling controller 00:07:53.183 NEW_FUNC[1/677]: 0x43cc68 in fuzz_vfio_user_dma_map /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:124 00:07:53.183 NEW_FUNC[2/677]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:53.183 #6 NEW cov: 11208 ft: 11109 corp: 2/33b lim: 32 exec/s: 0 rss: 74Mb L: 32/32 MS: 4 CMP-InsertRepeatedBytes-InsertByte-InsertRepeatedBytes- DE: "\377\377\377\377\377\377\001E"- 00:07:53.441 #12 NEW cov: 11222 ft: 14929 corp: 3/65b lim: 32 exec/s: 0 rss: 75Mb L: 32/32 MS: 1 ChangeBinInt- 00:07:53.441 NEW_FUNC[1/1]: 0x1c14158 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:53.441 #23 NEW cov: 11239 ft: 15619 corp: 4/97b lim: 32 exec/s: 0 rss: 76Mb L: 32/32 MS: 1 ChangeByte- 00:07:53.699 #24 NEW cov: 11239 ft: 16915 corp: 5/129b lim: 32 exec/s: 24 rss: 76Mb L: 32/32 MS: 1 PersAutoDict- DE: "\377\377\377\377\377\377\001E"- 00:07:53.957 #25 NEW cov: 11239 ft: 17443 corp: 6/161b lim: 32 exec/s: 25 rss: 76Mb L: 32/32 MS: 1 PersAutoDict- DE: "\377\377\377\377\377\377\001E"- 00:07:54.215 #26 NEW cov: 11239 ft: 17696 corp: 7/193b lim: 32 exec/s: 26 rss: 77Mb L: 32/32 MS: 1 ShuffleBytes- 00:07:54.215 #32 NEW cov: 11239 ft: 18042 corp: 8/225b lim: 32 exec/s: 32 rss: 77Mb L: 32/32 MS: 1 ShuffleBytes- 00:07:54.473 #33 NEW cov: 11239 ft: 18105 corp: 9/257b lim: 32 exec/s: 33 rss: 77Mb L: 32/32 MS: 1 CopyPart- 00:07:54.732 #34 NEW cov: 11246 ft: 18475 corp: 10/289b lim: 32 exec/s: 34 rss: 77Mb L: 32/32 MS: 1 CopyPart- 00:07:54.732 #43 NEW cov: 11246 ft: 18556 corp: 11/321b lim: 32 exec/s: 21 rss: 77Mb L: 
32/32 MS: 4 CrossOver-CopyPart-ChangeBit-CopyPart- 00:07:54.732 #43 DONE cov: 11246 ft: 18556 corp: 11/321b lim: 32 exec/s: 21 rss: 77Mb 00:07:54.732 ###### Recommended dictionary. ###### 00:07:54.732 "\377\377\377\377\377\377\001E" # Uses: 4 00:07:54.732 ###### End of recommended dictionary. ###### 00:07:54.732 Done 43 runs in 2 second(s) 00:07:54.732 [2024-11-27 15:07:20.023828] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /tmp/vfio-user-3/domain/2: disabling controller 00:07:54.992 15:07:20 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-3 /var/tmp/suppress_vfio_fuzz 00:07:54.992 15:07:20 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:54.992 15:07:20 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:54.992 15:07:20 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:07:54.992 15:07:20 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=4 00:07:54.992 15:07:20 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:54.992 15:07:20 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:54.992 15:07:20 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:07:54.992 15:07:20 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-4 00:07:54.992 15:07:20 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-4/domain/1 00:07:54.992 15:07:20 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-4/domain/2 00:07:54.992 15:07:20 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-4/fuzz_vfio_json.conf 00:07:54.992 15:07:20 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:54.992 15:07:20 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:54.992 15:07:20 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-4 /tmp/vfio-user-4/domain/1 /tmp/vfio-user-4/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:07:54.992 15:07:20 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-4/domain/1%; 00:07:54.992 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-4/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:54.992 15:07:20 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:54.992 15:07:20 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:54.992 15:07:20 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-4/domain/1 -c /tmp/vfio-user-4/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 -Y /tmp/vfio-user-4/domain/2 -r /tmp/vfio-user-4/spdk4.sock -Z 4 00:07:54.992 [2024-11-27 15:07:20.293736] Starting SPDK v25.01-pre git sha1 2e10c84c8 / DPDK 24.03.0 initialization... 
00:07:54.992 [2024-11-27 15:07:20.293807] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2378330 ] 00:07:55.251 [2024-11-27 15:07:20.374939] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.251 [2024-11-27 15:07:20.416385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.509 INFO: Running with entropic power schedule (0xFF, 100). 00:07:55.509 INFO: Seed: 586065405 00:07:55.509 INFO: Loaded 1 modules (386935 inline 8-bit counters): 386935 [0x2c2f28c, 0x2c8da03), 00:07:55.509 INFO: Loaded 1 PC tables (386935 PCs): 386935 [0x2c8da08,0x3275178), 00:07:55.509 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:07:55.509 INFO: A corpus is not provided, starting from an empty corpus 00:07:55.509 #2 INITED exec/s: 0 rss: 67Mb 00:07:55.509 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:55.509 This may also happen if the target rejected all inputs we tried so far 00:07:55.509 [2024-11-27 15:07:20.668833] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /tmp/vfio-user-4/domain/2: enabling controller 00:07:56.077 NEW_FUNC[1/666]: 0x43d4e8 in fuzz_vfio_user_dma_unmap /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:144 00:07:56.077 NEW_FUNC[2/666]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:56.077 #208 NEW cov: 11085 ft: 11134 corp: 2/33b lim: 32 exec/s: 0 rss: 74Mb L: 32/32 MS: 1 InsertRepeatedBytes- 00:07:56.077 NEW_FUNC[1/11]: 0x130d1a8 in spdk_nvmf_request_complete /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:4797 00:07:56.077 NEW_FUNC[2/11]: 0x130d578 in spdk_thread_exec_msg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/thread.h:546 00:07:56.077 #209 NEW cov: 11228 ft: 14367 corp: 3/65b lim: 32 exec/s: 0 rss: 75Mb L: 32/32 MS: 1 ChangeBinInt- 00:07:56.336 NEW_FUNC[1/1]: 0x1c14158 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:56.336 #215 NEW cov: 11245 ft: 14799 corp: 4/97b lim: 32 exec/s: 0 rss: 76Mb L: 32/32 MS: 1 ChangeBinInt- 00:07:56.336 #221 NEW cov: 11245 ft: 15942 corp: 5/129b lim: 32 exec/s: 221 rss: 76Mb L: 32/32 MS: 1 ChangeBinInt- 00:07:56.595 #222 NEW cov: 11245 ft: 16369 corp: 6/161b lim: 32 exec/s: 222 rss: 76Mb L: 32/32 MS: 1 ChangeBit- 00:07:56.855 #223 NEW cov: 11245 ft: 16446 corp: 7/193b lim: 32 exec/s: 223 rss: 76Mb L: 32/32 MS: 1 CopyPart- 00:07:56.855 #224 NEW cov: 11245 ft: 16631 corp: 8/225b lim: 32 exec/s: 224 rss: 76Mb L: 32/32 MS: 1 CrossOver- 00:07:57.115 #230 NEW cov: 11245 ft: 16861 corp: 9/257b lim: 32 exec/s: 230 rss: 76Mb L: 32/32 MS: 1 ChangeBit- 00:07:57.374 #231 NEW cov: 11252 ft: 17184 corp: 10/289b lim: 32 exec/s: 231 rss: 76Mb L: 32/32 MS: 1 ChangeBinInt- 00:07:57.374 #242 NEW cov: 11252 ft: 17513 corp: 11/321b lim: 32 exec/s: 121 rss: 76Mb L: 32/32 MS: 1 ChangeBinInt- 00:07:57.374 #242 DONE cov: 11252 ft: 17513 corp: 11/321b lim: 32 exec/s: 121 rss: 76Mb 00:07:57.374 Done 242 runs in 2 second(s) 00:07:57.635 [2024-11-27 15:07:22.728784] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /tmp/vfio-user-4/domain/2: disabling controller 00:07:57.635 15:07:22 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm 
-rf /tmp/vfio-user-4 /var/tmp/suppress_vfio_fuzz 00:07:57.635 15:07:22 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:57.635 15:07:22 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:57.635 15:07:22 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1 00:07:57.635 15:07:22 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=5 00:07:57.635 15:07:22 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:57.635 15:07:22 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:57.635 15:07:22 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:07:57.635 15:07:22 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-5 00:07:57.635 15:07:22 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-5/domain/1 00:07:57.635 15:07:22 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-5/domain/2 00:07:57.635 15:07:22 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-5/fuzz_vfio_json.conf 00:07:57.635 15:07:22 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:57.635 15:07:22 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:57.635 15:07:22 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-5 /tmp/vfio-user-5/domain/1 /tmp/vfio-user-5/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:07:57.635 15:07:22 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-5/domain/1%; 00:07:57.635 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-5/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:57.635 15:07:22 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:57.635 15:07:22 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:57.635 15:07:22 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-5/domain/1 -c /tmp/vfio-user-5/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 -Y /tmp/vfio-user-5/domain/2 -r /tmp/vfio-user-5/spdk5.sock -Z 5 00:07:57.895 [2024-11-27 15:07:22.992844] Starting SPDK v25.01-pre git sha1 2e10c84c8 / DPDK 24.03.0 initialization... 00:07:57.895 [2024-11-27 15:07:22.992917] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2378818 ] 00:07:57.895 [2024-11-27 15:07:23.072294] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.895 [2024-11-27 15:07:23.111733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.155 INFO: Running with entropic power schedule (0xFF, 100). 
00:07:58.155 INFO: Seed: 3279078860 00:07:58.155 INFO: Loaded 1 modules (386935 inline 8-bit counters): 386935 [0x2c2f28c, 0x2c8da03), 00:07:58.155 INFO: Loaded 1 PC tables (386935 PCs): 386935 [0x2c8da08,0x3275178), 00:07:58.155 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:07:58.155 INFO: A corpus is not provided, starting from an empty corpus 00:07:58.155 #2 INITED exec/s: 0 rss: 67Mb 00:07:58.155 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:58.155 This may also happen if the target rejected all inputs we tried so far 00:07:58.155 [2024-11-27 15:07:23.355162] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /tmp/vfio-user-5/domain/2: enabling controller 00:07:58.155 [2024-11-27 15:07:23.406650] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:58.155 [2024-11-27 15:07:23.406688] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:58.674 NEW_FUNC[1/678]: 0x43dee8 in fuzz_vfio_user_irq_set /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:171 00:07:58.674 NEW_FUNC[2/678]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:58.674 #61 NEW cov: 11223 ft: 11188 corp: 2/14b lim: 13 exec/s: 0 rss: 74Mb L: 13/13 MS: 4 CopyPart-CrossOver-ChangeBinInt-InsertRepeatedBytes- 00:07:58.674 [2024-11-27 15:07:23.871350] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:58.674 [2024-11-27 15:07:23.871394] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:58.674 #62 NEW cov: 11237 ft: 14418 corp: 3/27b lim: 13 exec/s: 0 rss: 75Mb L: 13/13 MS: 1 ChangeBit- 00:07:58.933 [2024-11-27 15:07:24.049777] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:58.933 [2024-11-27 15:07:24.049809] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:58.933 NEW_FUNC[1/1]: 0x1c14158 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:07:58.933 #63 NEW cov: 11254 ft: 15072 corp: 4/40b lim: 13 exec/s: 0 rss: 76Mb L: 13/13 MS: 1 ChangeBit- 00:07:58.933 [2024-11-27 15:07:24.222974] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:58.933 [2024-11-27 15:07:24.223004] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:59.192 #64 NEW cov: 11254 ft: 15649 corp: 5/53b lim: 13 exec/s: 64 rss: 77Mb L: 13/13 MS: 1 ChangeByte- 00:07:59.192 [2024-11-27 15:07:24.387262] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:59.192 [2024-11-27 15:07:24.387292] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:59.192 #70 NEW cov: 11254 ft: 16693 corp: 6/66b lim: 13 exec/s: 70 rss: 77Mb L: 13/13 MS: 1 CMP- DE: "\001\000 \000\020`\320\364"- 00:07:59.451 [2024-11-27 15:07:24.552377] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:59.451 [2024-11-27 15:07:24.552408] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:59.451 #76 NEW cov: 11254 ft: 16884 corp: 7/79b lim: 13 exec/s: 76 rss: 77Mb L: 13/13 MS: 1 ChangeByte- 00:07:59.451 [2024-11-27 15:07:24.720347] vfio_user.c:3110:vfio_user_log: *ERROR*: 
/tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:59.451 [2024-11-27 15:07:24.720378] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:59.710 #77 NEW cov: 11254 ft: 16933 corp: 8/92b lim: 13 exec/s: 77 rss: 77Mb L: 13/13 MS: 1 ShuffleBytes- 00:07:59.710 [2024-11-27 15:07:24.884620] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:59.710 [2024-11-27 15:07:24.884662] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:59.710 #78 NEW cov: 11254 ft: 17051 corp: 9/105b lim: 13 exec/s: 78 rss: 77Mb L: 13/13 MS: 1 ShuffleBytes- 00:07:59.969 [2024-11-27 15:07:25.052574] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:59.969 [2024-11-27 15:07:25.052612] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:59.969 #79 NEW cov: 11261 ft: 17282 corp: 10/118b lim: 13 exec/s: 79 rss: 77Mb L: 13/13 MS: 1 ChangeBit- 00:07:59.969 [2024-11-27 15:07:25.224544] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:59.969 [2024-11-27 15:07:25.224575] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:00.228 #80 NEW cov: 11261 ft: 17708 corp: 11/131b lim: 13 exec/s: 40 rss: 77Mb L: 13/13 MS: 1 ChangeBit- 00:08:00.228 #80 DONE cov: 11261 ft: 17708 corp: 11/131b lim: 13 exec/s: 40 rss: 77Mb 00:08:00.228 ###### Recommended dictionary. ###### 00:08:00.228 "\001\000 \000\020`\320\364" # Uses: 0 00:08:00.228 ###### End of recommended dictionary. ###### 00:08:00.228 Done 80 runs in 2 second(s) 00:08:00.228 [2024-11-27 15:07:25.343798] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /tmp/vfio-user-5/domain/2: disabling controller 00:08:00.228 15:07:25 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-5 /var/tmp/suppress_vfio_fuzz 00:08:00.228 15:07:25 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:00.228 15:07:25 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:00.228 15:07:25 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1 00:08:00.228 15:07:25 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=6 00:08:00.228 15:07:25 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:08:00.228 15:07:25 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:08:00.228 15:07:25 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:08:00.228 15:07:25 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-6 00:08:00.228 15:07:25 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-6/domain/1 00:08:00.228 15:07:25 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-6/domain/2 00:08:00.228 15:07:25 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-6/fuzz_vfio_json.conf 00:08:00.228 15:07:25 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:08:00.228 15:07:25 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:08:00.228 15:07:25 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-6 /tmp/vfio-user-6/domain/1 /tmp/vfio-user-6/domain/2 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:08:00.487 15:07:25 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-6/domain/1%; 00:08:00.487 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-6/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:00.487 15:07:25 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:00.487 15:07:25 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:08:00.487 15:07:25 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-6/domain/1 -c /tmp/vfio-user-6/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 -Y /tmp/vfio-user-6/domain/2 -r /tmp/vfio-user-6/spdk6.sock -Z 6 00:08:00.487 [2024-11-27 15:07:25.602506] Starting SPDK v25.01-pre git sha1 2e10c84c8 / DPDK 24.03.0 initialization... 00:08:00.487 [2024-11-27 15:07:25.602580] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2379355 ] 00:08:00.487 [2024-11-27 15:07:25.682134] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.487 [2024-11-27 15:07:25.721785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.746 INFO: Running with entropic power schedule (0xFF, 100). 00:08:00.746 INFO: Seed: 1595086430 00:08:00.746 INFO: Loaded 1 modules (386935 inline 8-bit counters): 386935 [0x2c2f28c, 0x2c8da03), 00:08:00.746 INFO: Loaded 1 PC tables (386935 PCs): 386935 [0x2c8da08,0x3275178), 00:08:00.746 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:08:00.746 INFO: A corpus is not provided, starting from an empty corpus 00:08:00.746 #2 INITED exec/s: 0 rss: 68Mb 00:08:00.746 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:08:00.746 This may also happen if the target rejected all inputs we tried so far 00:08:00.746 [2024-11-27 15:07:25.963325] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /tmp/vfio-user-6/domain/2: enabling controller 00:08:00.746 [2024-11-27 15:07:26.014640] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:00.746 [2024-11-27 15:07:26.014676] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:01.263 NEW_FUNC[1/678]: 0x43ebd8 in fuzz_vfio_user_set_msix /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:190 00:08:01.263 NEW_FUNC[2/678]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:01.263 #36 NEW cov: 11195 ft: 11178 corp: 2/10b lim: 9 exec/s: 0 rss: 74Mb L: 9/9 MS: 4 InsertByte-InsertRepeatedBytes-CrossOver-CMP- DE: "\001 "- 00:08:01.263 [2024-11-27 15:07:26.483232] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:01.263 [2024-11-27 15:07:26.483274] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:01.522 #47 NEW cov: 11226 ft: 14184 corp: 3/19b lim: 9 exec/s: 0 rss: 75Mb L: 9/9 MS: 1 InsertRepeatedBytes- 00:08:01.522 [2024-11-27 15:07:26.679856] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:01.522 [2024-11-27 15:07:26.679887] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:01.522 NEW_FUNC[1/1]: 0x1c14158 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:08:01.522 #48 NEW cov: 11246 ft: 15803 corp: 4/28b lim: 9 exec/s: 0 rss: 76Mb L: 9/9 MS: 1 ShuffleBytes- 00:08:01.781 [2024-11-27 15:07:26.875144] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:01.781 [2024-11-27 15:07:26.875177] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:01.781 #54 NEW cov: 11246 ft: 16344 corp: 5/37b lim: 9 exec/s: 54 rss: 76Mb L: 9/9 MS: 1 ChangeBinInt- 00:08:01.781 [2024-11-27 15:07:27.057643] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:01.781 [2024-11-27 15:07:27.057684] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:02.040 #55 NEW cov: 11246 ft: 17259 corp: 6/46b lim: 9 exec/s: 55 rss: 76Mb L: 9/9 MS: 1 ChangeBinInt- 00:08:02.040 [2024-11-27 15:07:27.240909] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:02.040 [2024-11-27 15:07:27.240939] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:02.040 #61 NEW cov: 11246 ft: 17356 corp: 7/55b lim: 9 exec/s: 61 rss: 76Mb L: 9/9 MS: 1 PersAutoDict- DE: "\001 "- 00:08:02.299 [2024-11-27 15:07:27.422819] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:02.299 [2024-11-27 15:07:27.422849] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:02.299 #65 NEW cov: 11246 ft: 17673 corp: 8/64b lim: 9 exec/s: 65 rss: 76Mb L: 9/9 MS: 4 CrossOver-CrossOver-CMP-CopyPart- DE: "\001\000\000\020"- 00:08:02.299 [2024-11-27 15:07:27.607430] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:02.299 [2024-11-27 15:07:27.607461] vfio_user.c: 144:vfio_user_read: 
*ERROR*: Command 8 return failure 00:08:02.557 #71 NEW cov: 11246 ft: 17846 corp: 9/73b lim: 9 exec/s: 71 rss: 76Mb L: 9/9 MS: 1 ChangeByte- 00:08:02.557 [2024-11-27 15:07:27.791395] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:02.557 [2024-11-27 15:07:27.791425] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:02.817 #72 NEW cov: 11253 ft: 18223 corp: 10/82b lim: 9 exec/s: 72 rss: 77Mb L: 9/9 MS: 1 ChangeBinInt- 00:08:02.817 [2024-11-27 15:07:27.977828] vfio_user.c:3110:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:02.817 [2024-11-27 15:07:27.977861] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:02.817 #73 NEW cov: 11253 ft: 18265 corp: 11/91b lim: 9 exec/s: 36 rss: 77Mb L: 9/9 MS: 1 CopyPart- 00:08:02.817 #73 DONE cov: 11253 ft: 18265 corp: 11/91b lim: 9 exec/s: 36 rss: 77Mb 00:08:02.817 ###### Recommended dictionary. ###### 00:08:02.817 "\001 " # Uses: 4 00:08:02.817 "\001\000\000\020" # Uses: 0 00:08:02.817 ###### End of recommended dictionary. ###### 00:08:02.817 Done 73 runs in 2 second(s) 00:08:02.817 [2024-11-27 15:07:28.105784] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /tmp/vfio-user-6/domain/2: disabling controller 00:08:03.075 15:07:28 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-6 /var/tmp/suppress_vfio_fuzz 00:08:03.075 15:07:28 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:03.075 15:07:28 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:03.075 15:07:28 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:08:03.075 00:08:03.075 real 0m19.269s 00:08:03.076 user 0m27.160s 00:08:03.076 sys 0m1.854s 00:08:03.076 15:07:28 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:03.076 15:07:28 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:08:03.076 ************************************ 00:08:03.076 END TEST vfio_llvm_fuzz 00:08:03.076 ************************************ 00:08:03.076 00:08:03.076 real 1m22.889s 00:08:03.076 user 2m7.155s 00:08:03.076 sys 0m9.204s 00:08:03.076 15:07:28 llvm_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:03.076 15:07:28 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:08:03.076 ************************************ 00:08:03.076 END TEST llvm_fuzz 00:08:03.076 ************************************ 00:08:03.076 15:07:28 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:08:03.076 15:07:28 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:08:03.076 15:07:28 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:08:03.076 15:07:28 -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:03.076 15:07:28 -- common/autotest_common.sh@10 -- # set +x 00:08:03.335 15:07:28 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:08:03.335 15:07:28 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:08:03.335 15:07:28 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:08:03.335 15:07:28 -- common/autotest_common.sh@10 -- # set +x 00:08:09.901 INFO: APP EXITING 00:08:09.901 INFO: killing all VMs 00:08:09.901 INFO: killing vhost app 00:08:09.901 INFO: EXIT DONE 00:08:12.435 Waiting for block devices as requested 00:08:12.694 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:08:12.694 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:08:12.694 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:08:12.953 0000:00:04.4 
(8086 2021): vfio-pci -> ioatdma 00:08:12.953 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:08:12.953 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:08:12.953 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:08:13.213 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:08:13.213 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:08:13.213 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:08:13.472 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:08:13.472 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:08:13.472 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:08:13.731 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:08:13.731 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:08:13.731 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:08:13.991 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:08:17.309 Cleaning 00:08:17.309 Removing: /dev/shm/spdk_tgt_trace.pid2351369 00:08:17.310 Removing: /var/run/dpdk/spdk_pid2348926 00:08:17.310 Removing: /var/run/dpdk/spdk_pid2350111 00:08:17.310 Removing: /var/run/dpdk/spdk_pid2351369 00:08:17.310 Removing: /var/run/dpdk/spdk_pid2351845 00:08:17.310 Removing: /var/run/dpdk/spdk_pid2352931 00:08:17.310 Removing: /var/run/dpdk/spdk_pid2353062 00:08:17.310 Removing: /var/run/dpdk/spdk_pid2354070 00:08:17.310 Removing: /var/run/dpdk/spdk_pid2354081 00:08:17.310 Removing: /var/run/dpdk/spdk_pid2354513 00:08:17.310 Removing: /var/run/dpdk/spdk_pid2354844 00:08:17.310 Removing: /var/run/dpdk/spdk_pid2355163 00:08:17.310 Removing: /var/run/dpdk/spdk_pid2355505 00:08:17.310 Removing: /var/run/dpdk/spdk_pid2355831 00:08:17.310 Removing: /var/run/dpdk/spdk_pid2355995 00:08:17.310 Removing: /var/run/dpdk/spdk_pid2356151 00:08:17.310 Removing: /var/run/dpdk/spdk_pid2356467 00:08:17.310 Removing: /var/run/dpdk/spdk_pid2357309 00:08:17.310 Removing: /var/run/dpdk/spdk_pid2360469 00:08:17.310 Removing: /var/run/dpdk/spdk_pid2360727 00:08:17.310 Removing: /var/run/dpdk/spdk_pid2360877 00:08:17.310 Removing: /var/run/dpdk/spdk_pid2361046 00:08:17.310 Removing: /var/run/dpdk/spdk_pid2361431 00:08:17.310 Removing: /var/run/dpdk/spdk_pid2361605 00:08:17.310 Removing: /var/run/dpdk/spdk_pid2362091 00:08:17.310 Removing: /var/run/dpdk/spdk_pid2362219 00:08:17.310 Removing: /var/run/dpdk/spdk_pid2362513 00:08:17.310 Removing: /var/run/dpdk/spdk_pid2362527 00:08:17.310 Removing: /var/run/dpdk/spdk_pid2362817 00:08:17.310 Removing: /var/run/dpdk/spdk_pid2362827 00:08:17.310 Removing: /var/run/dpdk/spdk_pid2363476 00:08:17.310 Removing: /var/run/dpdk/spdk_pid2363691 00:08:17.310 Removing: /var/run/dpdk/spdk_pid2363904 00:08:17.310 Removing: /var/run/dpdk/spdk_pid2364233 00:08:17.310 Removing: /var/run/dpdk/spdk_pid2365220 00:08:17.310 Removing: /var/run/dpdk/spdk_pid2365698 00:08:17.310 Removing: /var/run/dpdk/spdk_pid2366232 00:08:17.310 Removing: /var/run/dpdk/spdk_pid2366563 00:08:17.310 Removing: /var/run/dpdk/spdk_pid2367057 00:08:17.310 Removing: /var/run/dpdk/spdk_pid2367556 00:08:17.310 Removing: /var/run/dpdk/spdk_pid2367874 00:08:17.310 Removing: /var/run/dpdk/spdk_pid2368403 00:08:17.310 Removing: /var/run/dpdk/spdk_pid2368850 00:08:17.310 Removing: /var/run/dpdk/spdk_pid2369233 00:08:17.310 Removing: /var/run/dpdk/spdk_pid2369762 00:08:17.310 Removing: /var/run/dpdk/spdk_pid2370144 00:08:17.310 Removing: /var/run/dpdk/spdk_pid2370585 00:08:17.310 Removing: /var/run/dpdk/spdk_pid2371123 00:08:17.310 Removing: /var/run/dpdk/spdk_pid2371413 00:08:17.310 Removing: /var/run/dpdk/spdk_pid2371943 00:08:17.310 Removing: /var/run/dpdk/spdk_pid2372380 00:08:17.310 Removing: 
/var/run/dpdk/spdk_pid2372758 00:08:17.310 Removing: /var/run/dpdk/spdk_pid2373297 00:08:17.310 Removing: /var/run/dpdk/spdk_pid2373613 00:08:17.310 Removing: /var/run/dpdk/spdk_pid2374118 00:08:17.310 Removing: /var/run/dpdk/spdk_pid2374586 00:08:17.310 Removing: /var/run/dpdk/spdk_pid2374933 00:08:17.310 Removing: /var/run/dpdk/spdk_pid2375468 00:08:17.310 Removing: /var/run/dpdk/spdk_pid2375855 00:08:17.310 Removing: /var/run/dpdk/spdk_pid2376466 00:08:17.310 Removing: /var/run/dpdk/spdk_pid2376922 00:08:17.310 Removing: /var/run/dpdk/spdk_pid2377460 00:08:17.310 Removing: /var/run/dpdk/spdk_pid2377978 00:08:17.310 Removing: /var/run/dpdk/spdk_pid2378330 00:08:17.310 Removing: /var/run/dpdk/spdk_pid2378818 00:08:17.310 Removing: /var/run/dpdk/spdk_pid2379355 00:08:17.310 Clean 00:08:17.569 15:07:42 -- common/autotest_common.sh@1453 -- # return 0 00:08:17.569 15:07:42 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:08:17.569 15:07:42 -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:17.569 15:07:42 -- common/autotest_common.sh@10 -- # set +x 00:08:17.569 15:07:42 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:08:17.569 15:07:42 -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:17.569 15:07:42 -- common/autotest_common.sh@10 -- # set +x 00:08:17.569 15:07:42 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt 00:08:17.569 15:07:42 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log ]] 00:08:17.569 15:07:42 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log 00:08:17.569 15:07:42 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:08:17.569 15:07:42 -- spdk/autotest.sh@398 -- # hostname 00:08:17.569 15:07:42 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -c --no-external -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk -t spdk-wfp-20 -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_test.info 00:08:17.828 geninfo: WARNING: invalid characters removed from testname! 
00:08:23.235 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_stubs.gcda 00:08:23.235 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/mdns_server.gcda 00:08:28.532 15:07:53 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -a /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:08:36.652 15:08:00 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:08:40.844 15:08:05 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:08:46.116 15:08:11 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:08:51.388 15:08:16 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:08:56.660 15:08:21 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:09:01.933 15:08:27 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:09:01.933 15:08:27 -- spdk/autorun.sh@1 -- $ timing_finish 00:09:01.933 15:08:27 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt ]] 00:09:01.933 15:08:27 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:09:01.933 15:08:27 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:09:01.933 15:08:27 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt 00:09:01.933 + [[ -n 2239225 ]] 00:09:01.933 + sudo kill 2239225 00:09:01.942 [Pipeline] } 00:09:01.956 [Pipeline] // stage 00:09:01.962 [Pipeline] } 00:09:01.975 [Pipeline] // timeout 00:09:01.981 [Pipeline] } 00:09:01.994 [Pipeline] // catchError 00:09:01.999 [Pipeline] } 00:09:02.016 [Pipeline] // wrap 00:09:02.022 [Pipeline] } 00:09:02.036 [Pipeline] // catchError 00:09:02.047 [Pipeline] stage 00:09:02.050 [Pipeline] { (Epilogue) 00:09:02.063 [Pipeline] catchError 00:09:02.065 [Pipeline] { 00:09:02.079 [Pipeline] echo 00:09:02.081 Cleanup processes 00:09:02.086 [Pipeline] sh 00:09:02.374 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:09:02.374 2387750 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:09:02.387 [Pipeline] sh 00:09:02.668 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:09:02.669 ++ grep -v 'sudo pgrep' 00:09:02.669 ++ awk '{print $1}' 00:09:02.669 + sudo kill -9 00:09:02.669 + true 00:09:02.680 [Pipeline] sh 00:09:02.964 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:09:02.964 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,718 MiB 00:09:02.964 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,718 MiB 00:09:04.341 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,718 MiB 00:09:16.575 [Pipeline] sh 00:09:16.860 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:09:16.860 Artifacts sizes are good 00:09:16.875 [Pipeline] archiveArtifacts 00:09:16.882 Archiving artifacts 00:09:17.077 [Pipeline] sh 00:09:17.426 + sudo chown -R sys_sgci: /var/jenkins/workspace/short-fuzz-phy-autotest 00:09:17.442 [Pipeline] cleanWs 00:09:17.454 [WS-CLEANUP] Deleting project workspace... 00:09:17.454 [WS-CLEANUP] Deferred wipeout is used... 00:09:17.460 [WS-CLEANUP] done 00:09:17.462 [Pipeline] } 00:09:17.484 [Pipeline] // catchError 00:09:17.496 [Pipeline] sh 00:09:17.779 + logger -p user.info -t JENKINS-CI 00:09:17.787 [Pipeline] } 00:09:17.801 [Pipeline] // stage 00:09:17.807 [Pipeline] } 00:09:17.822 [Pipeline] // node 00:09:17.827 [Pipeline] End of Pipeline 00:09:18.013 Finished: SUCCESS
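
The fuzz runs logged above drive SPDK's llvm_vfio_fuzz harness once per fuzzer type: the -Z flag selects the target, and types 3 through 6 in this excerpt correspond to fuzz_vfio_user_dma_map, fuzz_vfio_user_dma_unmap, fuzz_vfio_user_irq_set and fuzz_vfio_user_set_msix per the NEW_FUNC lines. Each run gets its own /tmp/vfio-user-N socket directory, a per-run copy of fuzz_vfio_json.conf rewritten with sed, and an LSAN suppression file. Below is a minimal sketch of that driver pattern reconstructed from the commands visible in the log; it is not the actual SPDK run.sh/common.sh. The fuzzer binary path, the corpus location, and the redirect of the echo leak:... lines into the suppression file are assumptions made for illustration.

#!/usr/bin/env bash
# Sketch of the per-fuzzer driver pattern visible in the log above.
# Paths, the fuzzer binary location and the corpus directory are simplified
# assumptions; the real logic lives in SPDK's test/fuzz/llvm scripts.
set -euo pipefail

fuzz_num=7                 # fuzzer types 0..6; this excerpt covers 3..6
timen=1                    # -t: time limit for the run (timen=1 in the log)
core=0x1                   # -m: reactor core mask
fuzzer=./llvm_vfio_fuzz    # assumed path to the built harness

for ((i = 3; i < fuzz_num; i++)); do
    dir=/tmp/vfio-user-$i
    corpus=./corpus/llvm_vfio_$i
    supp=/var/tmp/suppress_vfio_fuzz
    mkdir -p "$dir/domain/1" "$dir/domain/2" "$corpus"

    # Point the JSON config at this run's vfio-user socket directories.
    sed -e "s%/tmp/vfio-user/domain/1%$dir/domain/1%; s%/tmp/vfio-user/domain/2%$dir/domain/2%" \
        fuzz_vfio_json.conf > "$dir/fuzz_vfio_json.conf"

    # Suppress known-benign leaks; the redirect target is an assumption,
    # the log only shows the two echo leak:... lines.
    echo leak:spdk_nvmf_qpair_disconnect  > "$supp"
    echo leak:nvmf_ctrlr_create          >> "$supp"

    # Flags mirror the log's invocation (the -P output-directory flag is
    # omitted here); -Z selects the fuzz target.
    LSAN_OPTIONS="report_objects=1:suppressions=$supp:print_suppressions=0" \
    "$fuzzer" -m "$core" -s 0 \
        -F "$dir/domain/1" -c "$dir/fuzz_vfio_json.conf" \
        -t "$timen" -D "$corpus" \
        -Y "$dir/domain/2" -r "$dir/spdk$i.sock" -Z "$i"

    rm -rf "$dir" "$supp"
done

In the real scripts the loop counter, time limit and core mask arrive as arguments (start_llvm_fuzz N 1 0x1 in the log), and the coverage data produced by the instrumented binaries is afterwards merged and filtered with the lcov/geninfo invocations shown near the end of the log.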